Don't you hate it when the source data is so inconsistent?
The best you can do with this is to import it into a single field (say
FullName), and then use a series of Update queries to populate the actual
fields.
By examining the data, you have to make a series of assumptions. For
example: if the field contains a comma, assume that everything before the
comma is the surname. So, under the FullName field, you enter this criterion:
Like "?*,*"
Then in a fresh column in the Field row:
Trim(Left([FullName], Instr([FullName], ",") - 1))
and in the next column in the Field row:
Trim(Mid([FullName], Instr([FullName], ",") + 1))
Check this gives you sensible results.
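In SQL view, that whole select query looks something like this (assuming a table named Contacts; substitute your own table and field names):

```sql
SELECT FullName,
       Trim(Left([FullName], InStr([FullName], ",") - 1)) AS Surname,
       Trim(Mid([FullName], InStr([FullName], ",") + 1)) AS GivenNames
FROM Contacts
WHERE FullName Like "?*,*";
```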
Then change the query to an Update query (Update Query on the Query menu).
Move the expressions from the Field row into the Update To row under the
LastName and FirstName fields respectively.
Now add a criterion under both LastName and FirstName of:
Is Null
so that subsequent operations don't overwrite these fields.
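Put together, the Update query's SQL should look something like this (again assuming a table named Contacts with LastName and FirstName fields):

```sql
UPDATE Contacts
SET LastName  = Trim(Left([FullName], InStr([FullName], ",") - 1)),
    FirstName = Trim(Mid([FullName], InStr([FullName], ",") + 1))
WHERE FullName Like "?*,*"
  AND LastName Is Null
  AND FirstName Is Null;
```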
Then you're off trying to create expressions that solve the next bunch of
cases. InstrRev() is useful for finding the *last* slash or space in
the name. Right() and Len() will be useful too. Most expressions need Trim() so
you don't get leading or trailing spaces.
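For example, names entered as "First Last" (no comma) might be handled by taking everything after the last space as the surname. A sketch only (it will mis-assign two-word surnames, and InStrRev() requires Access 2000 or later):

```sql
UPDATE Contacts
SET LastName  = Trim(Right([FullName], Len([FullName]) - InStrRev([FullName], " "))),
    FirstName = Trim(Left([FullName], InStrRev([FullName], " ") - 1))
WHERE FullName Like "?* ?*"
  AND InStr([FullName], ",") = 0
  AND LastName Is Null
  AND FirstName Is Null;
```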
You can also parse a particular word from the field using a custom function
like this:
ParseWord(): Parses the first, last, or n-th word/item from a field/list, at:
http://allenbrowne.com/func-10.html