Connectivity to a Cloud Provider


Mr. JYC

Hi,

I want to use a cloud computing provider with my Access database. I want to
either put both my front and back end in the cloud or just the back end. The
questions that I have are as follows:

Is Access able to connect to a backend database over the Internet using a URL?

Would Access have file locking issues with WebDAV preventing simultaneous
usage?
 

Douglas J. Steele

I'm not aware of any way for Access to connect to a database using either
http: or ftp: protocols.
 

John W. Vinson

Is Access able to connect to a backend database over the Internet using a URL?

No. It is really NOT designed as an internet application.
Would Access have file locking issues with WebDAV preventing simultaneous
usage?

Serious ones.
 

Albert D. Kallal

Mr. JYC said:
Hi,

Is Access able to connect to a backend database over the Internet using a
URL?

If you're talking about an Access database, no. On the other hand, if you're
talking about a SQL Server database, then yes, you can most certainly use a
URL.

Most internet providers can give you a couple of SQL databases "free" as
part of your web hosting package. In fact, even the budget ones like Dotster
or GoDaddy include SQL Server, and we're talking the whole web site +
SQL Server for about $10 per month.

So in the above cases you can't use an Access back end, but you can most
certainly use a URL with SQL Server. This is a really affordable way to get
into cloud computing. You simply continue to deploy your Access front end on
each of the computers. Thus anyone with high speed Internet should be able
to function reasonably well if you've written your application with
performance and SQL Server issues in mind. I'm doing this now...and it works
VERY well indeed.
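
Just as a rough sketch of what the relink looks like in the front end (the
server, database, and login below are placeholders, not a real provider),
note it is a plain host name in a DSN-less ODBC connection string, not a URL:

' Rough sketch: relink the front end's ODBC linked tables to a hosted
' SQL Server. Server, database, user, and password are placeholders.
Public Sub RelinkToHostedSqlServer()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Dim strConnect As String

    strConnect = "ODBC;DRIVER={SQL Server};" & _
                 "SERVER=sql.example-host.com,1433;" & _
                 "DATABASE=MyDatabase;" & _
                 "UID=myuser;PWD=mypassword;"

    Set db = CurrentDb
    For Each tdf In db.TableDefs
        ' Only touch tables that are already ODBC links.
        If (tdf.Attributes And dbAttachedODBC) <> 0 Then
            tdf.Connect = strConnect
            tdf.RefreshLink
        End If
    Next tdf
End Sub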

Another way to do this for even less cost (free) right now is to consider
using SharePoint. If your tables are fairly small (say 2,000 to 4,000 records)
and you don't have a lot of related tables, then you can use www.officelive.com.

Remember, officelive.com is simply SharePoint "online". So Access can
up-size your data tables to SharePoint (in the cloud), and once again anyone
with an Internet connection can then share the data in those tables. Your
forms and the rest of your application will function as before. There are some
limitations when using SharePoint as compared to SQL Server, but for
the most part things such as autonumber IDs behave pretty much the same
whether you use Access with SQL Server or with SharePoint.

I don't recommend Access + SharePoint unless you're using Access 2007, as it
has a significant number of changes and optimizations. The beauty of
SharePoint right now is that the current Office Live version is free for the
first 5 users and 50 (or is it 500?) MB of storage. Regardless, the free online
edition of SharePoint is more than enough to get you started in cloud
computing for free right now.

The added bonus is you get into SharePoint. So, if you want to start a study
group at school, or if you have a group of people that need to collaborate on
a project and share all their documents in one spot that is reachable by the
web, then SharePoint can be an ideal solution.

So you cannot use a URL to an Access back end, but you still have a good
number of solutions for cloud computing, and they're all very affordable....
 

David W. Fenton

If you're talking about an Access database, no. On the other hand,
if you're talking about a SQL Server database, then yes, you can
most certainly use a URL.

I don't think that's right, Albert. You could use a domain/port, but
not a URL. A URL is of the form:

resource_type://domain:port/filepathname?query_string#anchor

(see http://en.wikipedia.org/wiki/URL)

The resource_type: is the protocol by which the connection is
negotiated. In Web browsers we mostly use http, https, ftp, etc.
None of those protocols can be used by Access. When you connect to a
SQL Server across the Internet, you don't use a protocol like http
or ftp, you connect directly -- you request a TCP connection on a
certain port and if it's accepted, the specific communication
details are worked out between the client and the server.

So, no. You can't use any form of URL from Access, even for a SQL
Server, precisely because that's not the definition of a URL.
 

Albert D. Kallal

David W. Fenton said:
I don't think that's right, Albert. You could use a domain/port, but
not a URL. A URL is of the form:

resource_type://domain:port/filepathname?query_string#anchor

Ah...OK. If a URL by definition "must" include the resource_type, then the
above makes sense...

In most cases, the resource type is optional when connecting to a web site.
So while a strict definition of a URL "must" include a resource type, for the
most part you don't have to type http:// into a web browser when typing
in the web address. I'm not sure if it's the browser that allows this
flexibility of not having to include the resource type, or perhaps it's the
DNS system that does this?

So I suppose in a better context the question + answer would be: can one use
a "domain name" as a connection string for SQL Server...and the answer is
yes, one can....
 

Armen Stein

Most internet providers can give you a couple of SQL databases "free" as
part of your web hosting package. In fact, even the budget ones like Dotster
or GoDaddy include SQL Server, and we're talking the whole web site +
SQL Server for about $10 per month.

But you need to check into this carefully - the last time I checked,
GoDaddy did not make its SQL Server accounts available outside its own
servers. So they would work for a web app also hosted at GoDaddy, but
not for Access or any other external use.

The companies that offer true port 1433 access to SQL Server are
usually more expensive. For example, we offer hosting plans with SQL
Server for about $28 per month. There are lots of features and
capacities to consider with hosting plans, so you need to do some
research to find the right one.

Armen Stein
Microsoft Access MVP
www.JStreetTech.com
 

David W. Fenton

Ah...OK. If a URL by definition "must" include the resource_type,
then the above makes sense...

In most cases, the resource type is optional when connecting to a
web site.

No, it's absolutely *not* optional. The UI of your web browser may
be designed to assume HTTP and then check the host and find out for
sure that it's an HTTP server and then supply the protocol for you,
but URLs without a protocol don't work at all. If you logged your
browser's connection strings you'd see that they always include the
protocol, without exception.
So while a strict definition of URL "must" include a resource
type, for the most part you don't have to type in http:// into a
web browser when typing in the web address.

Not having to type it is not at all the same thing as not needing
the protocol to be specified.
I'm not sure if it's the browser that allows this
flexibility in not having to include the resource type, or perhaps
it's the DNS system that does this?

DNS? What does DNS have to do with it? DNS servers don't have
anything to do with the web per se. They are simply dictionaries
that translate a domain into an IP address.
So I suppose in a better context the question + answer would be
can one use a "domain name" as a connection string for sql
server...and the answer is yes one can....

That's correct. The fact that you only type a domain name into your
browser's address bar does not in any way change the reality that
the protocol has to be supplied for the URL to work.
 

Albert D. Kallal

The companies that offer true port 1433 access to SQL Server are
usually more expensive. For example, we offer hosting plans with SQL
Server for about $28 per month. There are lots of features and
capacities to consider with hosting plans, so you need to do some
research to find the right one.

OK...I was not sure about GoDaddy, but I do know Dotster allows this.

So, Dotster does allow external connections to SQL Server.

For a monthly total of $9.45 you get:

- 100 GB storage
- 1.5 TB transfer
- host up to 50 domains
- SiteBuilder Unlimited
- ASP.NET

and 25 SQL Server databases (SQL Server 2000).

And if the above is too expensive, then for $6.75 you get:

- 10 GB storage
- 300 GB transfer
- SiteBuilder
- 100 email accounts

and 10 SQL Server databases.

If you go with Linux hosting + MySQL, the above becomes $5.75 per month.

http://www.dotster.com/products/hosting/plans/

So, I was not sure about GoDaddy...but there are some really low cost sites
out there that provide MS SQL Server and allow outside connections. The
problem with Dotster is you get SQL 2000, and there is no SSL connection,
which is not exactly the best from a security point of view.

I am using Dotster for "speed" testing Access applications that work/connect
over the internet, and the performance is very good for what amounts to
pocket change.

So, there are some really nice low cost providers out there for less than $10
per month.
 

Albert D. Kallal

David W. Fenton said:
No, it's absolutely *not* optional.

I was actually talking about typing it in, but you can connect to
servers, and you don't always have to specify the resource type.
The UI of your web browser may
be designed to assume HTTP and then check the host and find out for
sure that it's an HTTP server and then supply the protocol for you,
but URLs without a protocol don't work at all. If you logged your
browser's connection strings you'd see that they always include the
protocol, without exception.

The above will depend on the web server. For example, when I execute
FTP, or connect to SQL Server, no resource type is being specified at
all. I believe the server uses the port number to determine the default
protocol.

So, you can use SQL services, or FTP services, without specifying
the protocol.

So, I am not 100% convinced that the resource type MUST be passed in all
cases.

It might be a standard, and I suspect it is a good idea to include the resource
type, but in the case of FTP, I don't see the resource type being
passed anywhere in the data stream.

I do have SharePoint running here, so I could perhaps play around with a web
server and see if I can get anything to work without passing http.

So with the information I have, it would seem that specifying http:
when asking the server to dish out web pages is a good idea, but for
other resource types like ftp etc., that is not the case. And, I suspect the
issue of having to include the http: is as much an issue of how the server and
client are set up.
Not having to type it is not at all the same thing as not needing
the protocol to be specified.

Well, I am using FTP, and I am not entering the resource type in the URL. In
fact, I don't really need to use a URL; we are simply using a domain name.
The fact that you only type a domain name into your
browser's address bar does not in any way change the reality that
the protocol has to be supplied for the URL to work.

This is not a 100% given. For FTP transfers, and things like SQL Server, I
don't have to specify the resource type...only the domain name. I would not
be surprised if some web servers would also dish out web pages when requests
are done on port 80...
 

David W. Fenton

I was actually talking about typing it in, but you can connect to
servers, and you don't always have to specify the resource type.

From a web browser? If you're not typing the protocol, the web
browser is supplying it. That's the way web browsers (and URLs)
work.
The above will depend on the web server. For example, when I
execute FTP, or connect to SQL Server, no resource type is being
specified at all. I believe the server uses the port number to
determine the default protocol.

The web browser may choose the protocol based on a response from the
web server, yes.
So, you can use SQL services, or FTP services, without specifying
the protocol.

From a web browser?
So, I am not 100% convinced that the resource type MUST be passed
in all cases.

From a web browser? Web browsers primarily communicate via HTTP but
support some other protocols (FTP, Gopher, maybe others). I don't
know that a web browser could do anything with what a SQL Server
would send back, as the result will not be in a form that web
browsers are designed to handle.
It might be a standard, and I suspect it is a good idea to include
the resource type, but in the case of FTP, I don't see the
resource type being passed anywhere in the data stream.

From a web browser?

The question here is the definition of a URL. It requires a
protocol. If there's no protocol, it's not a URL, just a host name.

I can telnet by specifying a hostname/port, and the Telnet app takes
care of negotiating the appropriate communication protocol. But I
don't supply Telnet with a URL.
I do have SharePoint running here, so I could perhaps play around
with a web server and see if I can get anything to work without
passing http.

If you use the XMLHTTP object, you can try passing a hostname without
the protocol and see what you get back.
So with the information I have, it would seem that specifying
http: when asking the server to dish out web pages is a good idea,
but for other resource types like ftp etc., that is not the case.
And, I suspect the issue of having to include the http: is as much
an issue of how the server and client are set up.

Definitionally, without the protocol, it's not a URL. Period. End of
statement. And the URL was invented by Tim Berners-Lee for use by
his worldwide web. For WWW, that means http/https, but web browsers
(i.e., specific implementations of worldwide web clients) also
support other protocols besides http/https, but still send a full
URL when requesting data from a server.
Well, I am using FTP, and I am not entering the resource type in the
URL. In fact, I don't really need to use a URL; we are simply using
a domain name.

But the browser is supplying it, perhaps because it connects to the
server, finds its default port is port 21, and then supplies the ftp: protocol.
This is not a 100% given. For FTP transfers, and things like SQL
Server, I don't have to specify the resource type...only the domain
name. I would not be surprised if some web servers would also dish
out web pages when requests are done on port 80...

If you're connecting on your SQL Server port without a protocol and
your browser is displaying something, I'm curious as to what you're
getting back.

Definitionally, a URL requires the protocol.

Your original statement was simply wrong.
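
For what it's worth, the XMLHTTP test suggested above is easy to try from
VBA. A rough sketch (the host name is a placeholder, and the exact error you
get without a scheme may vary by MSXML version):

' Rough sketch: request the same host with and without a scheme.
Public Sub TestUrlScheme()
    Dim http As Object
    Set http = CreateObject("MSXML2.XMLHTTP")

    ' With the scheme: a normal request.
    http.Open "GET", "http://www.example.com/", False
    http.send
    Debug.Print "With http:// -> status "; http.Status

    ' Without the scheme it is not a valid URL, so expect a runtime error
    ' here rather than a response (the exact error depends on the MSXML version).
    On Error Resume Next
    http.Open "GET", "www.example.com", False
    http.send
    If Err.Number <> 0 Then
        Debug.Print "Without a scheme -> error "; Err.Number; ": "; Err.Description
    End If
    On Error GoTo 0
End Sub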
 

Glynn

Hi,

I have performance concerns about connecting Access 2007 and SharePoint: I
have an Access 2007 client linked (via broadband cable, Roadrunner) to a
remote SharePoint site hosted on Microsoft SharePoint Online.

A simple delete all query on a table of about 100 rows takes about 45
seconds. The SQL: "DELETE ReportOpenInvoices.* FROM ReportOpenInvoices;"
where ReportOpenInvoices is a table with an autonumber primary key (ID). Is
there a more efficient method to write the query?

I looked at tips for speeding up Access performance, and one suggestion was
to maintain a persistent connection to the linked database by opening a
database variable in VBA using the DAO OpenDatabase method and keeping this
variable open as long as my application is running. I had the following
suggestion:

To maintain the open connection, open a global variable to one of the linked
tables.

[In the declarations section of a standard module]
Public grs As DAO.Recordset

[In code that runs when the database starts]
Set grs = CurrentDb.OpenRecordset("tablename")

However, performance did not improve.

It is my understanding that SharePoint is built on top of SQL Server. Is
that true? If so, it would seem that a bulk delete would not encounter a
performance problem unless there was some sort of single row
processing/acknowledgement, but I am at a loss here.

It was also suggested to use a "pass-through" query. The pass-through query
seems to want a DSN name. If the SharePoint site is hosted by Microsoft
SharePoint Online, how is the SQL Server name determined?

Again, I so appreciate your help and guidance on this subject, as I am
trying to use Access as a front-end connection to Microsoft SharePoint
Online.

If Access 2007 is not an appropriate tool for this strategy, then please let
me know.
 

Armen Stein

It is my understanding that SharePoint is built on top of SQL Server. Is
that true? If so, it would seem that a bulk delete would not encounter a
performance problem unless there was some sort of single row
processing/acknowledgement, but I am at a loss here.

SharePoint is built "on top" of SQL Server, but it isn't using SQL
Server table structures directly. It uses another layer of "attribute
tables" that, while very flexible, are less efficient. I know the
Access team is working on improving Access/SharePoint performance with
each version. Taking 45 seconds to delete 100 records is unacceptable,
though.
It was also suggested to use a "pass-through" query. The pass-through query
seems to want a DSN name. If the SharePoint site is hosted by Microsoft
SharePoint Online, how is the SQL Server name determined?

I don't know how you would use a passthrough query for SharePoint. The
suggestion would seem to be for SQL Server, where passthrough queries
are a powerful tool. Hopefully Albert will jump in here, as he has a
lot of experience with Access & SharePoint.
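
For reference, when the back end really is SQL Server (not SharePoint), a
pass-through query can be built from DAO in a few lines. A minimal sketch,
with placeholder connection details and a hypothetical table name:

' Rough sketch of a DAO pass-through query against SQL Server only
' (it does not apply to SharePoint lists). Connection details and the
' table name are placeholders.
Public Sub RunPassThroughDelete()
    Dim qdf As DAO.QueryDef

    ' A temporary (unnamed) QueryDef; its SQL is sent to the server as-is.
    Set qdf = CurrentDb.CreateQueryDef("")
    qdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=sql.example-host.com;" & _
                  "DATABASE=MyDatabase;UID=myuser;PWD=mypassword;"
    qdf.ReturnsRecords = False
    qdf.SQL = "DELETE FROM SomeTable WHERE ExpiryDate < GETDATE();"
    qdf.Execute dbFailOnError   ' runs server-side as one statement
End Sub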

Another possibility, although perhaps not fully baked for production
usage yet, is SQL Azure:
http://www.microsoft.com/azure/sql.mspx

Armen Stein
Microsoft Access MVP
www.JStreetTech.com
 

Albert D. Kallal

A simple delete all query on a table of about 100 rows takes about 45
seconds. The SQL: "DELETE ReportOpenInvoices.* FROM ReportOpenInvoices;"
where ReportOpenInvoices is a table with an autonumber primary key (ID).
Is there a more efficient method to write the query?

Unfortunately there is not a more efficient way. Each delete occurs
client side and is sent down the wire record by record. The SQL command is
NOT sent to SharePoint.
I looked at performance tips to speed up Access performance and one
suggestion was to have a persistent connection to the linked database

The above tip ONLY applies to NON-server connections; that suggestion is only
for a front end + back end split on an office LAN.

So, for SQL Server, MySQL, or SharePoint, that connection tip DOES NOT apply
at all. It will NOT speed up those queries for any of these server based
systems (it is only a non-server trick/suggestion).
It is my understanding that SharePoint is built on top of SQL Server. Is
that true?

Yes, it is true, but the Access data provider is connecting to the
SharePoint lists, and they really don't know about SQL at all.
If so, it would seem that a bulk delete would not encounter a
performance problem unless there was some sort of single row
processing/acknowledgement, but I am at a loss here.

That assumption is correct. Each record's delete command goes down the wire.
It was also suggested to use a "pass-through" query.

Pass-through queries don't apply to SharePoint lists (or SharePoint at all);
there is NO such ability. I think this would be my #2 feature request!
If Access 2007 is not an appropriate tool for this strategy, then please
let me know.

Well "strategy" is a broad word here.

If your application has to update lots of records in one go, then SharePoint
is not likely to work well. There is NO CONCEPT of a pass-through query. All
record updates MUST be done locally. So, while you can use/execute SQL
commands on that linked table, they run locally.

You can have a SharePoint table with 5,000 records in it, but your
application had better not have to touch or update all those 5,000 records.

SQL Server will perform QUITE a bit better than SharePoint over the internet
WHEN you are doing updates to more than one record (such as a delete like
yours, or even a SQL update command).

So, while you can restrict records in your forms (and reports), the
instant you need to deal with many records you have to use caution and
make extra efforts to determine whether SharePoint is going to be appropriate.

I have stress tested some SharePoint lists up to 80,000 records linked to
ms-access OVER the internet to get a feel for this stuff. That table is like
looking at a huge mountain, and you had better be picking only one rock off
that mountain. If you do something that is going to update all those records,
you would be talking hours of time to update.

Even with the off-line + replication mode that 2007 has, if you update
records, the record then has to make the trip up the wire (I am assuming
local cached tables here). Local cached tables HELP VERY much, but for
something like your delete command, they don't help.

Since SharePoint lists don't support referential integrity, you have to
write your own cascade delete routines.
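
As a rough sketch of what such a hand-rolled cascade delete looks like (the
table and field names here are hypothetical, and every deleted row still
travels up the wire one at a time):

' Rough sketch of a hand-rolled cascade delete against linked SharePoint
' lists. Table and field names are hypothetical.
Public Sub DeleteInvoiceWithDetails(ByVal lngInvoiceID As Long)
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Delete the child rows first, then the parent row. Each affected
    ' row is still sent to the server individually.
    db.Execute "DELETE FROM InvoiceDetails WHERE InvoiceID = " & lngInvoiceID, dbFailOnError
    db.Execute "DELETE FROM Invoices WHERE ID = " & lngInvoiceID, dbFailOnError
End Sub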

So, if you have a sales list of 1,200 contacts that you need your 3 sales
people to share on the road, then SharePoint is ideal.

If you have an application with 10,000 or more records, lots of tables and
lots of record code, then SharePoint is not going to work for you.
So, lots of related tables, or tables beyond the 5,000 record range, are not
going to work well if your application needs to work with major parts of that
data.

For applications that deal with lots of tables and related data, you are
likely better off using a cheap-o web provider that allows external SQL
Server connections. I am doing this for some clients and performance is
REALLY fantastic. I started doing this with my own SQL Server, and the
performance was good, but wow, eliminating my upload path out to the
internet and using an ISP is REALLY good. I mean, downright fast!

I am still a big fan of Office Live, but if your application requirements
don't fit within the limitations of SharePoint lists, then SharePoint is not
the solution for you. It is not a plug-in replacement for SQL Server. So, for
most cloud type solutions, SQL Server is going to be better, ESPECIALLY for
existing applications that do all kinds of things that SharePoint likely will
not like!

It takes good software design to work with SharePoint or SQL Server on the
internet. However, SQL Server is MORE forgiving in this regard since you
can resort to pass-through queries for updates and other types of things.
The other thing that really helps is using views, especially when they
include aggregate values from other tables but still result in ONE line
of data. So, a 10-record summary is instant speed-wise with SQL Server
because only 10 records come down the wire. That summary might be
the result of thousands of records, but your bandwidth in these cases is
even LESS than what you get out of a typical split front/back end
system on your office network. So, any type of optimizing with SQL Server
works absolute miracles...especially if you don't have a SQL server that has
to upload to your ISP and then download to your customers. As I mentioned,
when I removed that half of the upload equation, the performance turned
out to be really good! About as good as running on my local LAN!
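
To illustrate the view idea with a sketch (the view name, fields, and
connection details are placeholders), the point is that only the aggregated
rows ever cross the wire:

' Sketch: pull only aggregated rows from a server-side summary view, so
' just a handful of records cross the wire. Names and connection string
' are placeholders.
Public Sub ShowCustomerTotals()
    Dim qdf As DAO.QueryDef
    Dim rs As DAO.Recordset

    Set qdf = CurrentDb.CreateQueryDef("")
    qdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=sql.example-host.com;" & _
                  "DATABASE=MyDatabase;UID=myuser;PWD=mypassword;"
    qdf.ReturnsRecords = True
    qdf.SQL = "SELECT CustomerID, InvoiceCount, TotalAmount FROM vwCustomerTotals"

    Set rs = qdf.OpenRecordset()
    Do While Not rs.EOF
        Debug.Print rs!CustomerID, rs!InvoiceCount, rs!TotalAmount
        rs.MoveNext
    Loop
    rs.Close
End Sub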

I am under NDA on the new SharePoint stuff, but I will say that MS is very
much listening to our needs for better performance.
 

David W. Fenton

If you have an application with 10,000 or more records, lots of
tables and lots of record code, then SharePoint is not going to
work for you. So, lots of related tables, or tables beyond
the 5,000 record range, are not going to work well if your
application needs to work with major parts of that data.

I have never built an app for any of my clients that really fit
comfortably below that 10K limit.

[snip]
I am still a big fan of Office Live, but if your application
requirements don't fit within the limitations of SharePoint lists,
then SharePoint is not the solution for you. It is not a plug-in
replacement for SQL Server. So, for most cloud type solutions, SQL
Server is going to be better, ESPECIALLY for existing applications
that do all kinds of things that SharePoint likely will not like!

Thanks, Albert, for doing the work to get up to speed on SharePoint
and then letting us all know what you've learned. I never felt
SharePoint was an adequate replacement for Jet replication (because
of the lack of RI), but this makes it clear that it's not even close
to being adequate.

I just can't even think of one of my clients' apps that would be
appropriate for SharePoint.

I guess I just don't get what SharePoint is for with respect to
anything database-related.
 

Glynn

Hi Albert,

I appreciate your feedback and understand that SharePoint should not be a
replacement for a relational database.

However, acting on thousands of rows in large tables is one thing, but
taking 45 seconds to delete 100 rows in a SharePoint list is another. Unless
there is something fundamentally wrong with connecting an Access 2007 client
linked (via broadband cable, Roadrunner) to a remote SharePoint site hosted
on Microsoft SharePoint Online, then the Access team has missed the cloud
target.

If each record delete goes up the wire individually, would an approach that
uses web services, a.k.a. Using Microsoft Windows SharePoint Services with
the Microsoft Office System,
http://msdn.microsoft.com/en-us/library/aa159897(office.11).aspx, improve
performance?

Or, would a Visual Studio VB.Net client using web services eliminate the
individual record issue, or would it encounter similar performance problems
to Access?

As I have posted in the MSDN SharePoint Dev and Programming and Innovate on
Office forums, as well as the MS TechNet Office SharePoint and SharePoint
Online forums, it is somewhat perplexing that there is not more discussion of
performance issues and solutions. Your postings have been much appreciated.
 

Albert D. Kallal

Glynn said:
Hi Albert,

I appreciate your feedback and understand that SharePoint should not be a
replacement for a relational database.

However, acting on thousands of rows in large tables is one thing, but
taking 45 seconds to delete 100 rows in a SharePoint list is another.

Unfortunately it is not a separate issue. It is the nature of how the
connection to those lists works. The data processing is done locally, and it
is done record by record. There is not a lot you can do when all processing
has to occur client side. Your hands are tied in this fashion.
If each record delete goes up the wire individually, would an approach
that uses web services, a.k.a. Using Microsoft Windows SharePoint Services
with the Microsoft Office System,
http://msdn.microsoft.com/en-us/library/aa159897(office.11).aspx, improve
performance?

Well, in those examples we don't see any SQL delete commands. If you pull
down those 100 records as one batch, modify them and send them back up, you
will likely see better performance, but you still pulled the WHOLE list down,
modified it, and then pushed the whole list back up.

We are actually often in a better position using ms-access to update
SharePoint stuff because it does all the dirty work and allows us to deal
with one record at a time.
Or, would a Visual Studio VB.Net client using web services eliminate the
individual record issue, or would it encounter similar performance problems
to Access?

See above; it might, but few have bothered to test this. And, it is not going
to be a great solution that works with ms-access anyway. (But why not
write a few lines of code and try it out?)

As mentioned, ms-access does a great job of allowing you to deal
with one record at a time out of those lists now. Part of this issue is
really the nature of client/server, but *ALSO* that of a
design that also allows an off-line disconnected mode. And there is also the
need for scalability to thousands of users (if not hundreds of thousands of users).

It is really a different way of thinking. If your goal is to write keyboard-heavy
applications for the iPhone, and you then ask why there is not better
keyboard support on the iPhone, the answer is that is simply not the idea and
not how it works.

So, it really comes down to a paradigm change, and thinking in old ways for
a different technology does not work well. The success of the SharePoint
platform is likely beyond what anyone would have imagined (fastest product to
1 billion in sales in Microsoft's history).

So, just as it is not really practical to write a 3D video game in HTML, or to
expect heavy word-processing or keyboard applications on the iPhone, the
world of a disconnected SharePoint list needs different considerations in
your designs.
As I have posted in the MSDN SharePoint Dev and Programming and Innovate on
Office forums, as well as the MS TechNet Office SharePoint and SharePoint
Online forums, it is somewhat perplexing that there is not more discussion of
performance issues and solutions.

True, but it is also somewhat perplexing that we don't see 3D applications in
HTML, or applications that rely on lots of keyboarding for the iPhone. There
are good reasons for these issues. So, a company boss might have difficulty
grasping why their employees can't do all their letter writing on their
iPhone, since the boss just read an article explaining that some companies
find that with an iPhone their employees don't need a desktop computer
anymore.

We often see questions in this newsgroup about why ms-access is slow.
After asking some more questions, we find they are trying to use a split
database over the internet. They then ask why no one has addressed this problem
and why ms-access over a WAN to a back end mdb file does not work. The
next question is why there are no articles helping one to boost performance
when using ms-access with an mdb back end over a WAN. How come the industry is
not addressing this?

The answer here, of course, is that a file share over a VPN with
ms-access does not work well at all; it is not suited to it. In a nutshell, the
answer to a split file share over the internet is DON'T do it!! The answer
sucks, but that is the reality here. The solution for ms-access to an
mdb over a WAN is to eliminate the mdb on the back end. (But some people
can't seem to grasp this concept - note that you don't have this difficulty;
I am just putting the whole issue into perspective here.) So, using
*DIFFERENT* methods and technology to grab that back end data is the
solution. In this case it would mean moving the data to SQL Server, or even
SharePoint. In other words, the solution is not to make the mdb work over
the internet, but simply NOT to do that! You have to change how you work.

So, in effect your question is really much the same here. However, as I
mentioned, because SharePoint has become so hugely popular in such a
short time, there is a shortage of developers and information on
how to approach some of these issues. And, since SO MANY applications and
developers need to use SharePoint lists like SQL tables, we
are seeing HUGE efforts being put into improving SharePoint list
performance. MS has heard the needs of developers in regard to working
with SharePoint lists like SQL data tables.

I mean, MySQL in the early 2000s was all the rage, and yet it did not even
have referential integrity and cascade deletes, but if you read the trade
magazines of that time you would think MySQL was the second coming. So,
SharePoint stuff is new, and you also need a bit of a paradigm change in how
you deal with and use data.

When you look at something like eBay, they don't delete records, and when
they can avoid updating records, they do that too. So, for expired records,
they don't even update some column for expiry, but simply filter records by
some kind of date stamp (they can't afford the processing to update those
records). Then, perhaps during some down time, they scrub out old records.
This is done for performance and scalability. So, differing solutions and
different approaches are often needed when you adopt different technologies.

A great example is that in ms-access (even for desktop use, no network), our
designs REALLY need to avoid deleting records. So, if one can avoid creating
temporary tables for reports, then one should. If you don't do this,
you get serious bloat in the database.

If you are cooking fish, it doesn't take a lot of heat and energy (a bit of
steam can cook it fast). However, a big rib roast needs some serious energy
when you cook it. Different horses for different courses is not only a lesson
in our IT industry, but one for your daily life. The same concept of adapting
to different situations applies whether you are cooking or working in the IT
industry.

Most of the time, it is not climbing the mountain, but simply avoiding the
mountain altogether, that results in success in the IT industry.
 

James A. Fortune

Albert said:
If you are cooking fish, it doesn't take a lot of heat and energy (a bit of
steam can cook it fast). However, a big rib roast needs some serious energy
when you cook it. Different horses for different courses is not only a lesson
in our IT industry, but one for your daily life. The same concept of adapting
to different situations applies whether you are cooking or working in the IT
industry.

I like the general theme of your post, but the proximity of your
analogies has the server cooking different horses for each course -- not
exactly the kind of adaptability you had in mind :).

James A. Fortune
(e-mail address removed)

Note: I take lessons in how to care for and ride horses.

In the 40 years I've been riding, these horses have taught me how they
want to be ridden. They want to move away from pressure. You want to
use the minimum amount of motion necessary to control a horse.
Therefore, if you want to turn right, move your right shoulder up just a
little. That will put a little bit of weight pressure on the horse's
left side while opening the right rein a little to invite the horse to
move in that direction. Always relax and look at where you want to go.
Moving your left foot a little forward at the same time also seems to
help. -- Shelly
 

David W. Fenton

I mean, MySQL in the early 2000s was all the rage, and yet it did not
even have referential integrity and cascade deletes, but if you read
the trade magazines of that time you would think MySQL was the second
coming.

Many of the people using MySQL back then were amateurs who knew
absolutely nothing about database theory or design. I once read a
review of PostgreSQL by someone who posted as a MySQL guru who
marvelled at the mere idea of RI in PostgreSQL, and made the remark
(as I remember it) "I could imagine building whole applications
around this feature!"

This is why so many websites have such horridly denormalized
database structures, because the people designing them don't have
the first clue about how to work with normalized structures. A
client of mine (for whose website I have no responsibility) has a
website built by programmers in India that has a single table, but
each item in the table can have multiple categories. Do they store a
list of the categories in a single field? No! They create a
duplicate record for each category, with only the category field
having different data. The result is that when an item needs to be
deleted from the website, it has to be deleted as many times as
there are categories.

The shocking thing about this is how easy the SQL would be for a
properly normalized data structure (it's a total of 2 additional
tables). But people do what they know how to do, and that results in
messes.

And that's how MySQL got popular. It was free, all the web tutorials
on building database-driven websites used it in their examples, and
so its use increased.

That said, as much as I criticize MySQL, I use it on all the
database-driven websites I'm associated with -- I wouldn't even
consider Windows-based hosting because IIS is still simply not ready
for prime time.
 
