Running action function from SELECT query

B

Brian

I have an in-house app for which another developer is writing an ASP.NET
proxy to pass requests in from the web via XML. His intention is to keep the
proxy very generic, passing in just a query name with arguments, receiving
the results of the query, and passing those results back to the web. This
will work fine for retrieving information via SELECT queries, but it leaves
two questions:

1. When he passes in the name of and arguments for an insert or update
query, is there any way to return any type of results?

2. My app already uses an API to interface with a credit card processor.
Rather than re-inventing the wheel and having the website manage the
customer-to-processor traffic, our intention is to have the customer complete
the information online, then pass that through the proxy to my DB, where I
will pass it along to the processor.

My quandary is this (and I admit, it may be more of an ASP.NET question than
an Access one): is it safe/workable to include a call to the function that runs the
entire CC process in a SELECT query, returning just the CC processing result?
The function does this:

a. Creates entries in a couple of tables to record the attempted
transaction, whether successful or not.
b. Uses the processor's API to process the charge.
c. Records the results (approve/decline, approval code, etc.)

The function works perfectly in my front-end app when called by a local
user; I am just trying to find a way to expose it to the web proxy and think
I may be able to call the function from a SELECT query, returning the
function's value (approve/decline information) as the query results.

I have seen strange behavior before when calling action functions such as
this in SELECT queries, such as single records triggering multiple results or
records being missed. If that is not workable, is it possible to call a
function in my MDB directly from ASP.NET?
 
D

David H

I can't answer everything, but I can get you started...

Brian said:
I have an in-house app for which another developer is writing an ASP.NET
proxy to pass requests in from the web via XML. His intention is to keep the
proxy very generic, passing in just a query name with arguments, receiving
the results of the query, and passing those results back to the web. This
will work fine for retrieving information via SELECT queries, but it leaves
two questions:

1. When he passes in the name of and arguments for an insert or update
query, is there any way to return any type of results?
Not intrinsically. INSERT, UPDATE, and DELETE queries do not return any records.
They simply act on the underlying table. SELECT is the only statement that
returns records for whatever use. If you're executing the statement via DAO
or ADO, you can use the .RecordsAffected property to return a count of the
records impacted, but you can't actually return the records themselves. If you need to
return the records after an INSERT/UPDATE (or to confirm a DELETE), you could
do so by marking the records with a BATCH number - a number that uniquely
identifies the records INSERTED or UPDATED. After the INSERT/UPDATE runs, you
would then use a SELECT with the BATCH number in the WHERE clause. The
BATCH number can be any value that can be generated uniquely, such as the
current date/time converted to a number: 11/29/2008 9:11:09 PM becomes
39781.8827314815. Obviously, you'd need to add a field to the underlying table
to capture the batch number.
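
For illustration, here is a minimal VBA/DAO sketch of that batch-number idea. The table and field names (tblPendingOrders, tblOrders, CustomerID, BatchID) are just placeholders, not anything from your database:

Public Function InsertWithBatch() As DAO.Recordset
    Dim db As DAO.Database
    Dim batchNo As Double

    Set db = CurrentDb()
    batchNo = CDbl(Now())   ' e.g. 11/29/2008 9:11:09 PM -> 39781.8827314815

    ' Tag every row written by this run with the batch number.
    db.Execute "INSERT INTO tblOrders (CustomerID, BatchID) " & _
               "SELECT CustomerID, " & batchNo & " FROM tblPendingOrders", dbFailOnError
    Debug.Print db.RecordsAffected & " record(s) inserted"

    ' Return just the rows created by this batch.
    Set InsertWithBatch = db.OpenRecordset( _
        "SELECT * FROM tblOrders WHERE BatchID = " & batchNo)
End Function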

2. My app already uses an API to interface with a credit card processor.
Rather than re-inventing the wheel and having the website manage the
customer-to-processor traffic, our intention is to have the customer complete
the information online, then pass that through the proxy to my DB, where I
will pass it along to the processor.

My quandary is this (and I admit, it be more of an ASP.NET question than
Access): is it safe/workable to include a call to the function that runs the
entire CC process in a SELECT query, returning just the CC processing result.
You could conceivably call it by executing a SELECT query that contains a
FUNCTION that itself calls the API. I have done something similar, but the
query has always performed a specific function along with calling the
FUNCTION. I don't have any experience with wrapping the function in a SELECT
for the sole sake of executing the function (and thus the API) where the SELECT
doesn't actually return any rows. Which leads me to this...

It sounds like you'd be better off having a front end sitting on the web
server and then opening it using Automation to call and execute the API. Now
that I think about it, you'd pretty much have to use Automation anyway in
order to execute the SELECT query - the web page has to connect either way.
Along with that, to my knowledge you can't use XML to execute a query in
Access unless you set Access up to check for the existence of the XML
file, import it, process it, and run the query. Access would have to be
running on the server, which brings me back to the idea that you should use
Automation. I am a big fan of generic programming and flexibility; however,
given the limitations of Access, you can't just have Access run a query
without Access running.
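
If you did want to go the file-drop route, a bare-bones sketch would be a hidden form whose Timer event watches a folder and pulls the file in with Application.ImportXML. The path and the table it lands in are made up here for illustration:

Private Sub Form_Timer()
    Dim xmlPath As String
    xmlPath = "C:\Drop\request.xml"   ' hypothetical drop location

    If Dir$(xmlPath) <> "" Then
        ' Import the XML into a local table, then act on it.
        Application.ImportXML xmlPath, acStructureAndData
        ' ...process the imported rows and run the requested query here...
        Kill xmlPath   ' remove the file so it isn't picked up twice
    End If
End Sub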

The function does this:

a. Creates entries in a couple of tables to record the attempted
transaction, whether successful or not.
b. Uses the processor's API to process the charge.
c. Record the results (approve/decline, approval code, etc.)

The function works perfectly in my front-end app when called by a local
user; I am just trying to find a way to expose it to the web proxy and think
I may be able to call the function from a SELECT query, returning the
function's value (approve/decline information) as the query results.

I have seen strange behavior before when calling action functions such as
this in SELECT queries, such as single records triggering multiple results or
records being missed. If that is not workable, is it possible to call a
function in my MDB directly from ASP.NET?

Not directly per se, but if you have Access on the web server, ASP.NET should
be able to access the function using Automation.
 
D

David H

Of course, all that being said, there can be issues with automating Office
from ASP.NET. You might honestly be better off rebuilding the API integration
natively in ASP.NET, the advantage being that you lay the groundwork to move from an
Access FE to an ASP.NET solution.

Google: automating Office using ASP.NET.
 
B

Brian

I mistyped one thing: the proxy uses ADO.NET, not ASP.NET. Given that I have
spent about 200 hours developing the front end in Access over the last couple
of years, it is still very dynamic (i.e. ongoing development), and I do not
have time to pick up another programming platform, it is highly unlikely that
we will be migrating the FE away from Access any time soon.

Part of the issue here is that the Access DB is stored (BE) & run (FE)
locally on the LAN, but the web server is external, so we have to go through
a firewall. The proxy sits inside the firewall, and the web will communicate
to it through a forwarded port on the firewall.

I do not know how ADO.NET works, but the developer tells me he is able to
call queries directly from my FE and that he has that part working. I just
figured that if he could call a query, he may be able to also call a function.

At any rate, I can call just a function from a query, like this:

SELECT FunctionName([Arg1],[Arg2],[Arg3]) AS FunctionResult
FROM Version;

Without including a table name, I cannot do it, so I included a table that
has just one record but pulled no fields from it. The function can
return results to the query. My larger concern, however, is that when I
previously tried calling an active function (i.e. one that included SQL
INSERT statements) from a SELECT query, I got duplicate inserted records and
vast quantities of missing records, so I am not confident that the function
will not misfire in the same way here. It may be completely different in that
there is only a single record involved here, but I would hate to find out the
hard way - my function is a credit card processing function, and duplication
would mean multiple charges to the same card.
 
D

David H

That changes a lot. Yes, ADO can execute queries saved in an Access database.
(More below, especially my final comment.)

Brian said:
I mistyped on thing: the proxy uses ADO.NET, not ASP.NET. Given that I have
spent about 200 hours developing the front end in Access over the last couple
of years, it is still very dynamic (i.e. ongoing development), and I do not
have time to pick up another programming platform, it is highly unlikely that
we will be migrating the FE away from Access any time soon.

Part of the issue here is that the Access DB is stored (BE) & run (FE)
locally on the LAN, but the web server is external, so we have to go through
BTW - The FE should always be run on the local machines. I actually use a
VBScript to pull a copy of the FE down to the local machine and start Access
to avoid any distribution issues.
a firewall. The proxy sits inside the firewall, and the web will communicate
to it through a forwarded port on the firewall.

I do not know how ADO.NET works, but the developer tells me he is able to
call queries directly from my FE and that he has that part working.
ADO is just another way of connecting to the database. If you're familiar
with DAO, ADO was initially designed to replace it. Same basic principle
between the two.

I just
figured that if he could call a query, he may be able to also call a function.
Not to my knowledge. ADO is a means of working with the data in a table. It
does not directly expose any of the VBA modules in an Access database unless
you use the function as an expression in the query. (Again, I've done this
before, but in conjunction with inserting/deleting/updating live data, not as
a workaround.)
At any rate, I can call just a function from a query, like this:

SELECT FunctionName([Arg1],[Arg2],[Arg3]) AS FunctionResult
FROM Version;

Without including a table name, I cannot do it, so I included a table that
has just one record but include no results from that table.
Yes, that should work; the MVPs would probably have more feedback on that, as
it doesn't strike me as a best practice.

The function can
return results to the query. My larger concern, however, is that when I
previously tried calling an active function (i.e. one that included SQL
INSERT statements) from a SELECT query, I got duplicate inserted records and
vast quantities of missing records, so I am not confident that the function
will not misfire in the same way here.
FYI - The ability to embed an action query in a SELECT statement is the
basis of SQL injection attacks. I would hope that your web guy is at least
aware of this. IT SHOULD NOT IMPACT WHAT YOU'RE TRYING HERE, BUT I'D CONSIDER
IT JUST IN CASE.
http://en.wikipedia.org/wiki/SQL_injection

If the function encapsulates an ACTION query, the action query should run as
in 'SELECT myFunctionName AS EXPR1 FROM myTableName'. How the function
operates is independent of the manner in which it's called. If your SELECT
query returned duplicate records, then yes - you have duplicate inserted
records, as the function would have been executed for each specific record
returned by the SELECT statement. As to why you ended up with missing
records, I can't comment, BUT if your function requires parameters which
are then passed to the function, then that's probably where some of the
funkiness came into play - especially if the parameters were derived from
fields returned by the SELECT query. In other words, the SELECT query may not
have SELECTed the records that you were expecting, OR the values returned
provided adverse results when passed to the function.

It may be completely different in that
there is only a single record involved here, but I would hate to find out the
hard way - my function is a credit card processing function, and duplication
would mean multiple charges to the same card.

In ALL honesty, I would give the web developer the code that you have and
have him adapt it to run under ASP.NET, as it will eliminate a great deal of
the hassle that you're experiencing. Not to mention that it'd be more
consistent with sound development practices. In short, you're trying to do
something that Access was never designed to do.

I worked for Walt Disney World for 9 years in resorts, and I'm quite familiar
with charging credit cards twice courtesy of the ROOM & TAX CALCULATOR
project - it took a couple of weeks before the problem was identified. In
principle, the developer tried exactly what you're attempting - trying to reuse
functionality in a manner that it was never designed for. In the end, it was
written from scratch based on existing functionality.
 
B

Brian

Thank you for your response. I think the security will be OK, because the
proxy will accept requests only from the IP of the website, through a
specific non-standard firewall port, and the SQL statements will only be sent
as a direct consequence of user input after logon to the website. I will make
sure the web developer addresses the security concerns specifically, however.

There is a copy of the FE (.mde) on each PC. The BE (.mdb) is on the server
inside the LAN, accessed via a mapped drive. There are only seven stations,
this is still under ongoing development, and I also manage the network, so I
have not automated distribution - I just push FE updates out via a batch file
when users are off the system. Automation & version control to follow when I
have time between other projects.

My client still needs the ability to process CC charges in-house, regardless
of whether we expose that functionality online, since the hub of this
particular business is and will likely always be the in-house staff & app and
not the web. It is a local fresh organic produce-delivery business whose
personal customer relationship & logistical considerations are so dynamic
that the web will not likely ever supplant the local interface as the key
means of input.

I agree that my attempt to figure this out using a function embedded in a
SELECT query does not sound like best practice; however, if I abandon the
concept of processing web-initiated charges in-house, I must of necessity
instead provide a method for the website to transmit the results of the
charge attempt to my in-house app for storage. This involves creating a
batch header in one table, a detail record (including charge information and
the basic results of the transaction) in another, writing any associated errors
into yet another table, and then posting successful transactions into a live
Payment table to affect the customer balance. It seems to me that it could be
more difficult having the web hand across results that affect three different
tables than it would be to have the web hand off a single set of arguments
and using my existing processes to create all the records.

I could easily just revert to a CSV-file transfer and have the web app drop
the file, then use a form timer to import the data. But that removes the
realtime response required to give the web user an indication of the success
or failure of the charge attempt.

My example of a function call gone awry was a SELECT query which ran just
fine as a standalone - it pulled all 2800 records. I then encapsulated a
function call (as an additional expression in that same query) that did this
(which is what I was really after):

1. Split a semicolon-delimited field in the SELECT source into its
individual elements.
2. Looped through the resultant array.
3. Created & executed (CurrentDb.Execute) a SQL INSERT into another table for
each member of the split array, pairing it with another field from the
original table involved in the SELECT query.

This duplicated several records and skipped many more. I eventually just opened a
recordset, looped through it, and called the function on each record, and
that worked perfectly. I would have done the loop at the beginning, but this
was all done in an attempt to answer another forum user's question of how to
perform the split, and I was trying to avoid explaining the process of
looping through a recordset without having access to a live copy of the DB.
As soon as I had access to a full copy of the DB by e-mail, I did the
recordset/loop, with perfect success.
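
In rough outline, the recordset/loop version was something like this (table and field names changed to placeholders - tblSource, tblTarget, KeyField, MultiValueField are not the real names):

Public Sub SplitAndInsert()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim parts() As String
    Dim i As Long

    Set db = CurrentDb()
    ' KeyField is assumed numeric here; MultiValueField holds the semicolon-delimited text.
    Set rs = db.OpenRecordset("SELECT KeyField, MultiValueField FROM tblSource", dbOpenSnapshot)

    Do Until rs.EOF
        ' Split the semicolon-delimited field into its individual elements.
        parts = Split(Nz(rs!MultiValueField, ""), ";")
        For i = LBound(parts) To UBound(parts)
            If Trim$(parts(i)) <> "" Then
                ' One INSERT per element, paired with the key from the source row.
                db.Execute "INSERT INTO tblTarget (KeyField, Element) VALUES (" & _
                           rs!KeyField & ", '" & Replace(Trim$(parts(i)), "'", "''") & "')", _
                           dbFailOnError
            End If
        Next i
        rs.MoveNext
    Loop
    rs.Close
End Sub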

Strangely, the function worked perfectly on any single example, however, so
I suspected that shoving 2800 records through a SELECT query that, in turn,
did all the above, was somehow not processing each record consecutively.
Admittedly, there is a lot I do not know here.

 
D

David H

1) I *strongly* recommend not trying to hack what you have to provide credit
card processing. As one of my professors preached, "the true cost of software
development is determined by how much money it takes to fix a problem". If
you are able to hack things and make it work, what will happen when your
ASP.NET guy is long gone and you've been laid off or have left the company?
It's in the best long-term interests of your organization to keep things as
simple and as straightforward as humanly possible, so that someone who
isn't in the least familiar with the application can come in and quickly and
easily understand how things work.

2) A webapp hitting more than one table isn't any more difficult than
hitting just one table. All your developer might have to do is factor in
additional code to check that the queries succeeded. I would comfortably guess
that 90% of the webapps out there, from big to small, hit more than one table.

3) From a design, implementation, and debugging standpoint, I would have your
ASP.NET developer adapt your existing CC processing code to ASP.NET. It will
take additional time, but it will save you headaches in the long run, as it
would present a design that's logical and consistent from start to finish. It
may seem more complicated than the route that you'd like, but you have to weigh
the pros and the cons of both. Yes, simply passing in a single query is
easier on the website side, but much more complicated on the database side.
Yes, porting the credit card processing code to ASP.NET is more complicated
up front, but it makes integration with the database much, much cleaner, not
to mention maintenance down the road.

4) All of that being said, you could go the route of NOT providing realtime
credit card processing by simply using the website to capture the order and
payment information and then having an in-house person actually process the
charge. My organization has done that for four(?) years now. Granted, our
industry is such that delaying the transaction by 24-48 hours doesn't
impact the business, as there's typically a full 4-6 weeks between order
placement, fulfillment, and billing. If you want to go this route, you
could always set up a means by which the front end watches for new orders and
automatically charges the card. This would be a matter of setting up a hidden
form and, in its Timer event, requerying the database for new orders and then
automatically submitting the payment. Once the result is returned, the database
could then send out an automated email confirming payment (or that the credit
card was declined). You could set the Timer event procedure to fire every 15
minutes or so to keep the charge as close to the transaction as
possible.
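
A rough sketch of what that hidden form's code might look like - tblWebOrders, its fields, and ProcessCharge are stand-ins for your actual table and your existing CC function, not real names:

Private Sub Form_Load()
    Me.TimerInterval = 15 * 60 * 1000   ' fire the Timer event every 15 minutes
End Sub

Private Sub Form_Timer()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim result As String

    Set db = CurrentDb()
    ' tblWebOrders stands in for the table the website writes new orders to.
    Set rs = db.OpenRecordset( _
        "SELECT OrderID, CardToken, Amount FROM tblWebOrders WHERE Processed = False")

    Do Until rs.EOF
        ' ProcessCharge stands in for the existing CC-processing function.
        result = ProcessCharge(rs!OrderID, rs!CardToken, rs!Amount)

        ' Mark the order so it is never charged twice; an email notification could follow here.
        db.Execute "UPDATE tblWebOrders SET Processed = True, ChargeResult = '" & result & _
                   "' WHERE OrderID = " & rs!OrderID, dbFailOnError
        rs.MoveNext
    Loop
    rs.Close
End Sub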

 
B

Brian

#4 is where we are aiming at this point. There are several
change-request-related areas where we need staff to review the change for its
effect on other things anyway. The web just passes the info into a
pre-processing table, from which the users access & post or reject the
changes. It sounds like it will just be safer to put the payment processing
into the square hole of user-review processes rather than trying to fit this
square peg into the round hole of realtime processing.

 
D

David H

I'd still consider the idea of using code in a hidden form to try to take a
payment and, if successful, send out an email notification, plus something
where users can see credit cards from the site that declined so that they can
contact the cardholder. Most likely a form that specifically shows orders
from the website along with payment status, and a command button that attempts
to take payment.

Brian said:
#4 is where we are aiming at this point. There are several
change-request-related areas where we need staff to review the change for its
affect on other things anyway. The web just passes the info into a
pre-processing table, from whence the users access & post or reject the
changes. It sounds like it will just be safer to put the payment processing
into the square hole of user-review processes rather than trying to fit this
square peg into the round hole of realtime processing.

David H said:
1) I *strongly* recommend not trying to hack what you have to provide credit
card processing. As one of my professors preached, "the true cost of software
development is determined by how much money it takes to fix a problem". If
you are able to hack things and make it work, what will happen when your
ASP.NET guy is long gone and you've been laid off or have left the company?
Its in the best long term interests of your organization to keep things as
simple and as straight-forward as humanly possibly so that if someone who
isn't the least familar with the application comes in he/she can quickly and
easily understand how things work.

2) A webapp hitting more than one table isn't any more difficult than
hitting just one table. All your developer might have to do is factor in
additonal code to check that the queries succeeded. I would comfortably guess
that 90% of the webapps out there from big to small hit more than 1 table.

3) From a design, implementation and debugging standpoint, I would have your
ASP.NET developer adapt your existing CC processing code to ASP.NET. It will
take additional time, but will you save you headaches in the long run as it
would present a design that's logical and consistent from start to finish. It
may seem more complicated than the route that you'd like but you have to way
the pros and the cons of both. Yes, simply passing in a single query is
easier on the website side, but much more complicated on the database side.
Yes, porting the credit card processing code to ASP.NET is more complicated
upfront, but it makes integration with the database much, much cleaner, not
to mention maintenance down the road.

4) All of that being said, you could go the route of NOT providing realtime
credit card processing by simply using the website to capture the order and
payment information and then have an inhouse person actually process the
charge. My organization has done that for four(?) years now. Granted, our
industry is such that delyaing the transaction by 24 - 48 hours doesn't
impact the business as there's typically a full 4-6 weeks between order
placement, fullfillment and billing. If you are wanting to go this route, you
could always set up a means by which the front end watches for new orders and
automatically charges the card. This would be a matter of setting up a hidden
form and in the onTimer event requery the database for new orders and then
automatically submit the payment. Once the result is returned, the database
could then send out an automated email confirming payment (or that the credit
card was declined). You could setup the onTimer_Procedure to fire every 15
mintues or so to keep charging the card as close to the transaction as
possible.

Brian said:
Thank you for your response. I think the security will be OK, because the
proxy will accept requests only from the IP of the website, through a
specific non-standard firewall port, and the SQL statements will only be sent
as a direct consequence of user input after logon to the website. I will make
sure the web developer addresses the security concerns specifically, however.

There is a copy of the FE (.mde) on each PC. The BE (.mdb) is on the server
inside the LAN, accessed via a mapped drive. There are only seven stations,
this is still under ongoing development, and I manage the also network, so I
have not automated distribution - I just push FE updates out via a batch file
when users are off the system. Automation & version control to follow when I
have time between other projects.

My client still needs the ability to process CC charges in-house, regardless
of whether we expose that functionality online, since the hub of this
particular business is and will likely always be the in-house staff & app and
not the web. It is a local fresh organic produce-delivery business whose
personal customer relationships & logistical considerations are so dynamic
that the web will not likely ever supplant the local interface as the key
means of input.

I agree that my attempt to figure this out using a function embedded in a
SELECT query does not sound like best practice; however, if I abandon the
concept of processing web-initiated charges in-house, I must of necessity
instead provide a method for the website to transmit the results of the
charge attempt to my in-house app for storage. This involves creating a
batch header in one table, a detail record in another (including charge
information and the basic results of the transaction), and any associated
errors in yet another table, then posting successful transactions into a live
Payment table to affect the customer balance. It seems to me that it could be
more difficult having the web hand across results that affect three different
tables than it would be to have the web hand off a single set of arguments
and use my existing processes to create all the records.

I could easily just revert to a CSV-file transfer and have the web app drop
the file, then use a form timer to import the data. But that removes the
realtime response required to give the web user an indication of the success
or failure of the charge attempt.

My example of a function call gone awry was a SELECT query which ran just
fine as a standalone - it pulled all 2800 records. I then encapsulated a
function call (as an additional expression in that same query) that did this
(which is what I was really after):

1. Split a semicolon-delimited field in the SELECT source into its
individual elements.
2. Looped through the resultant array.
3. Created & executed (CurrentDB.Execute) a SQL INSERT into another table for
each member of the split array, pairing it with another field from the
original table involved in the SELECT query.

This duplicated several records and skipped many more. I eventually just
opened a recordset, looped through it, and called the function on each record,
and that worked perfectly. I would have done the loop from the beginning, but
this was all done in an attempt to answer another forum user's question about
how to perform the split, and I was trying to avoid explaining the process of
looping through a recordset without having access to a live copy of the DB.
As soon as I had a full copy of the DB by e-mail, I did the
recordset/loop, with perfect success.
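
For anyone reading along, the recordset version that worked was essentially
this shape (table and field names here are generic placeholders, not the real
ones):

Dim db As DAO.Database
Dim rs As DAO.Recordset
Dim varItems As Variant
Dim i As Integer

Set db = CurrentDb
Set rs = db.OpenRecordset( _
    "SELECT KeyField, DelimitedField FROM tblSource", dbOpenSnapshot)

Do While Not rs.EOF
    ' Split the semicolon-delimited field into its individual elements
    varItems = Split(rs!DelimitedField & "", ";")
    For i = LBound(varItems) To UBound(varItems)
        If Len(Trim$(varItems(i))) > 0 Then
            ' One INSERT per element, paired with the key from the source row
            db.Execute "INSERT INTO tblTarget (KeyField, ItemValue) VALUES (" & _
                rs!KeyField & ", '" & Replace(varItems(i), "'", "''") & "')", _
                dbFailOnError
        End If
    Next i
    rs.MoveNext
Loop

rs.Close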

Strangely, though, the function worked perfectly on any single example, so
I suspected that shoving 2800 records through a SELECT query that, in turn,
did all of the above was somehow not processing each record consecutively.
Admittedly, there is a lot I do not know here.

:

That changes a lot. Yes, ADO can execute queries saved in an Access database.
(More below, especially my final comment.)

:

I mistyped one thing: the proxy uses ADO.NET, not ASP.NET. Given that I have
spent about 200 hours developing the front end in Access over the last couple
of years, it is still very dynamic (i.e. ongoing development), and I do not
have time to pick up another programming platform, it is highly unlikely that
we will be migrating the FE away from Access any time soon.

Part of the issue here is that the Access DB is stored (BE) & run (FE)
locally on the LAN, but the web server is external, so we have to go through
a firewall. The proxy sits inside the firewall, and the web will communicate
with it through a forwarded port on the firewall.

BTW - The FE should always be run on the local machines. I actually use a
VBScript to pull a copy of the FE down to the local machine and start Access
to avoid any distribution issues.

I do not know how ADO.NET works, but the developer tells me he is able to
call queries directly from my FE and that he has that part working.
ADO is just another way of connecting to the database. If you're familiar
with DAO, ADO was initially designed to replace it. Same basic principle
between the two.
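
Just to picture the mechanics, running a saved Access query through ADO looks
roughly like this from VBA (late bound, so no reference is needed; the path,
provider string and query name are placeholders - your developer's ADO.NET
code does the equivalent with the OleDb classes):

Dim cn As Object    ' ADODB.Connection
Dim rs As Object    ' ADODB.Recordset

Set cn = CreateObject("ADODB.Connection")
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\MyBackEnd.mdb"

' A saved SELECT query can be executed by name, like a stored procedure
Set rs = cn.Execute("qrySomeSavedQuery")

Do While Not rs.EOF
    Debug.Print rs.Fields(0).Value
    rs.MoveNext
Loop

rs.Close
cn.Close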

I just figured that if he could call a query, he may be able to also call a
function.
Not to my knowledge. ADO is a means of working with the data in a table. It
does not directly expose any of the VBA modules in an Access database unless
you use the function as an expression in a query. (Again, I've done this
before, but in conjunction with inserting/deleting/updating live data, not as
a workaround.)


At any rate, I can call just a function from a query, like this:

SELECT FunctionName([Arg1],[Arg2],[Arg3]) AS FunctionResult
FROM Version;

Without including a table name, I cannot do it, so I included a table that
has just one record but included no results from that table.
Yes, that should work; the MVPs would probably have more feedback on that, as
it doesn't strike me as a best practice.
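
For clarity, the function being called that way just needs to be a Public
function in a standard module so the query's expression service can see it. A
bare skeleton (purely illustrative - this is not the actual CC code) would be:

' Standard module, not a form module
Public Function FunctionName(Arg1 As Variant, Arg2 As Variant, _
                             Arg3 As Variant) As String
    On Error GoTo ErrHandler

    ' ...log the attempt, call the processor API, record the result...

    FunctionName = "APPROVED|123456"   ' whatever approve/decline string you return
    Exit Function

ErrHandler:
    FunctionName = "ERROR|" & Err.Description
End Function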

The function can
return results to the query. My larger concern, however, is that when I
previously tried calling an active function (i.e. one that included SQL
INSERT statements) from a SELECT query, I got duplicate inserted records and
vast quantities of missing records, so I am not confident that the function
will not misfire in the same way here.
FYI - the ability to piggyback an action statement onto a SELECT (typically
by concatenating unvalidated user input into the SQL string) is the basis of
SQL injection attacks. I would hope that your web guy is at least aware of
this. It should not impact what you're trying here, but I'd consider it just
in case.
http://en.wikipedia.org/wiki/SQL_injection
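
On the Access side, one way to keep user-supplied values out of the SQL text
altogether is a parameterised saved query driven through a DAO QueryDef - a
rough sketch, with qryChargeLookup and its parameter invented for the example:

Dim db As DAO.Database
Dim qdf As DAO.QueryDef
Dim rs As DAO.Recordset

Set db = CurrentDb
' qryChargeLookup is a hypothetical saved query defined as:
'   PARAMETERS pCustID Long;
'   SELECT * FROM tblCustomers WHERE CustomerID = pCustID;
Set qdf = db.QueryDefs("qryChargeLookup")
qdf.Parameters("pCustID") = 12345   ' the value never becomes part of the SQL text
Set rs = qdf.OpenRecordset()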

If the function encapsulates an action query, the action query should run as
in 'SELECT myFunctionName AS EXPR1 FROM myTableName'. How the function
operates is independent of the manner in which it's called. If your SELECT
query returned duplicate records, then yes - you have duplicate inserted
records, as the function would have been executed for each record
returned by the SELECT statement. As to why you ended up with missing
records, I can't comment, BUT if your function takes parameters, that's
probably where some of the funkiness came into play - especially if the
parameters were derived from fields returned by the SELECT query. In other
words, the SELECT query may not have SELECTed the records you were expecting,
OR the values returned produced adverse results when passed to the function.

It may be completely different in that
there is only a single record involved here, but I would hate to find out the
hard way - my function is a credit card processing function, and duplication
would mean multiple charges to the same card.


In ALL honesty, I would give the web developer the code that you have and
have him adapt it to run under ASP.NET, as it will eliminate a great deal of
the hassle that you're experiencing. Not to mention that it'd be more
consistent with sound development practices. In short, you're trying to do
something that Access was never designed to do.

I worked for Walt Disney World for 9 years in resorts, and I'm quite familiar
with charging credit cards twice courtesy of the ROOM & TAX CALCULATOR
project - it took a couple of weeks before the problem was identified. In
principle, that developer tried exactly what you're attempting - reusing
functionality in a manner it was never designed for. In the end, it was
rewritten from scratch based on the existing functionality.

:

I can't answer every thing, but I can get you started...

:

I have an in-house app for which another developer is writing an ASP.NET
proxy to pass requests in from the web via XML. His intention is to keep the
proxy very generic, passing in just a query name with arguments, receiving
the results of the query, and passing those results back to the web. This
will work fine for retrieving information via SELECT queries, but it leaves
two questions:

1. When he passes in the name of and arguments for an insert or update
query, is there any way to return any type of results?
Not intrinsically. INSERT, UPDATE, DELETE queries do not return any records.
They simply act on the underlying table. SELECT is the only statement that
returns records for whatever use. If you're executing the statement via DAO
or ADO, you can use the .RecordsAffected property to return a count of the
records impacted, but you can't actually return the records. If you need to
return the records after an INSERT/UPDATE (or to confirm a DELETE), you could
do so by marking the records with a BATCH number - a number that unique
identifies the records INSERTED or UPDATED. On the INSERT/UPDATE runs, you
would then use a SELECT with the BATCH number in the WHERE statement. The
BATCH number can be any number that could be uniquely created such as the
current date/time converted to number as in 11/29/2008 9:11:09 PM becomes
39781.8827314815. Obviously, you'd need to a record to the underlying table
to capture the batch number.
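
If it helps, a minimal sketch of that batch-number idea (table and field
names invented for the example) might be:

Dim db As DAO.Database
Dim dblBatch As Double

Set db = CurrentDb
dblBatch = CDbl(Now())   ' e.g. 11/29/2008 9:11:09 PM -> 39781.8827314815

' Stamp the batch number on the rows as they are updated
db.Execute "UPDATE tblOrders SET OrderStatus = 'Posted', BatchNo = " & _
    dblBatch & " WHERE OrderStatus = 'New'", dbFailOnError
Debug.Print db.RecordsAffected & " rows updated"

' Then return exactly those rows with an ordinary SELECT:
'   SELECT * FROM tblOrders WHERE BatchNo = [the batch value above];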



2. My app already uses an API to interface with a credit card processor.
Rather than re-inventing the wheel and having the website manage the
customer-to-processor traffic, our intention is to have the customer complete
the information online, then pass that through the proxy to my DB, where I
will pass it along to the processor.

My quandary is this (and I admit, it be more of an ASP.NET question than
Access): is it safe/workable to include a call to the function that runs the
entire CC process in a SELECT query, returning just the CC processing result.
You could conceivably call it by executing a SELECT query that contains a
FUNCTION that itself calls the API. I have done something similar, but the
query has always performed a specific function along with calling the
FUNCTION. I don't have any experience with wrapping the function in a SELECT
for the sole sake of executing the function (and thus the API) where the
SELECT doesn't actually return any rows. Which leads me to this...

It sounds like you'd be better off having a front end sitting on the web
server and then opening it using Automation to call and execute the API. Now
that I think about it, you'd pretty much have to use Automation anyway in
order to execute the SELECT query - the web page has to connect either way.
Along with that, to my knowledge you can't use XML to execute a query in
Access unless you set up Access to be checking for the existence of the XML
file, import it, process it and run the query. Access would have to be
running on the server, which brings me back to the idea that you should use
Automation. I am a big fan of generic programming and flexibility; however,
given the limitations of Access, you can't just have Access run a query
without Access running.


The function does this:

a. Creates entries in a couple of tables to record the attempted
transaction, whether successful or not.
b. Uses the processor's API to process the charge.
c. Record the results (approve/decline, approval code, etc.)

The function works perfectly in my front-end app when called by a local
user; I am just trying to find a way to expose it to the web proxy and think
I may be able to call the function from a SELECT query, returning the
function's value (approve/decline information) as the query results.

I have seen strange behavior before when calling action functions such as
this in SELECT queries, such as single records triggering multiple results or
records being missed. If that is not workable, is it possible to call a
function in my MDB directly from ASP.NET?

Not directly per se, but if you have Access on the web server, ASP.NET should
be able to call the function using Automation.
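
For completeness, the Automation route boils down to creating an
Access.Application instance on the server and invoking the function by name
with Application.Run. A bare-bones sketch in VBA-style code (the path, the
ProcessCreditCard name and its arguments are placeholders; the ASP.NET side
would do the same thing through COM interop):

Dim appAccess As Object
Dim strResult As String

Set appAccess = CreateObject("Access.Application")
appAccess.OpenCurrentDatabase "C:\Data\MyFrontEnd.mde"

' Run the public function by name and capture its return value
strResult = appAccess.Run("ProcessCreditCard", 12345, 25.5)

appAccess.CloseCurrentDatabase
appAccess.Quit
Set appAccess = Nothing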
 
