

Data Access for Partially Connected Applications : Page 4

Modern applications require more sophisticated data access features than a simple connection to SQL Server. Data needs to be available in distributed scenarios as well as offline scenarios. This article provides an architectural overview and implementation examples that support these scenarios.





Adding Internet Functionality
I referred earlier to a data service based on a Web service, so let's go ahead and implement it. There are a few ways to implement such a service. The simplest is to take whatever command is sent to the data service, extract the command text, and send it off to a Web service that re-creates a real command object from the transmitted command text, fires it against the database, and sends the result back to the client.

The trickiest part is that command objects are not serializable. In other words, they cannot be sent as Web service parameters. This is a stumbling block because, although you can extract the command text from the command object, you are likely to need more information to support all scenarios. You can work around this problem by extracting all the information you need from the command object and sending it some other way. For instance, you can put the required information into an XML string, send that to the Web service, and use it to re-create a real command object on the server.

Listing 3 shows the implementation of a Web service class that is called by the client stub shown in Listing 4. As you can see, the code on the Web server simply uses the standard SQL Server data service to connect to the database and execute the desired commands. The only part worth mentioning is the GetCommandFromXml() method, which turns the transmitted command information into a real SqlCommand object that is used for the actual execution of the command.
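Listing 3 itself is not reproduced here, but a method along the lines of GetCommandFromXml() might look roughly like this. The XML layout (a command element with attributes and nested parameter elements) is an assumption for illustration, not necessarily the exact format used in the listing:

```csharp
// A sketch of a GetCommandFromXml()-style method. The XML element and
// attribute names are illustrative assumptions, not the article's exact format.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Xml;

public class CommandDeserializer
{
    public static SqlCommand GetCommandFromXml(string commandXml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(commandXml);
        XmlElement root = doc.DocumentElement; // the <command> element

        SqlCommand command = new SqlCommand();
        command.CommandText = root.GetAttribute("text");
        command.CommandType = (CommandType)Enum.Parse(
            typeof(CommandType), root.GetAttribute("type"));

        // Re-create each parameter from its serialized form.
        foreach (XmlNode node in root.SelectNodes("parameter"))
        {
            XmlElement param = (XmlElement)node;
            command.Parameters.AddWithValue(
                param.GetAttribute("name"), param.GetAttribute("value"));
        }
        return command;
    }
}
```

The re-created command can then be attached to a server-side connection and executed exactly like a command created locally.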

The client code shown in Listing 4 is also pretty simple. For the most part, it simply routes calls to the Web service implemented in Listing 3 (referred to as WebDataService in this implementation). Once again, only one part is noteworthy: the GetSerializedCommand() method, which turns all the parts of the command object you care about into XML so it can be passed to the Web service.
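A GetSerializedCommand()-style method might look roughly like the following. Again, the XML layout is an illustrative assumption chosen to carry the same pieces of information a server-side GetCommandFromXml() would need; the actual listing may use a different format:

```csharp
// A sketch of a client-side GetSerializedCommand()-style method.
// The XML layout is an assumption for illustration.
using System.Data;
using System.Data.SqlClient;
using System.Text;
using System.Xml;

public class CommandSerializer
{
    public static string GetSerializedCommand(SqlCommand command)
    {
        StringBuilder sb = new StringBuilder();
        XmlWriterSettings settings = new XmlWriterSettings();
        settings.OmitXmlDeclaration = true;
        using (XmlWriter writer = XmlWriter.Create(sb, settings))
        {
            writer.WriteStartElement("command");
            writer.WriteAttributeString("text", command.CommandText);
            writer.WriteAttributeString("type", command.CommandType.ToString());

            // Carry each parameter along so the server can rebuild it.
            foreach (SqlParameter param in command.Parameters)
            {
                writer.WriteStartElement("parameter");
                writer.WriteAttributeString("name", param.ParameterName);
                writer.WriteAttributeString("value",
                    param.Value == null ? "" : param.Value.ToString());
                writer.WriteEndElement();
            }
            writer.WriteEndElement();
        }
        return sb.ToString();
    }
}
```

Note that a production version would also need to preserve parameter types and directions; this sketch carries only names and values.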

As you can see, the service is very simple overall. This simplicity is largely due to how easy ASP.NET makes it to create Web services. Many difficult aspects, such as transferring query results over the Internet, are handled automatically.

At this point, the Web service-based data service is complete. With the current system configuration, this service will be used automatically whenever the "normal" SQL Server data service is not valid. If you want to see the new service in action right away (it can be difficult to simulate failover on a single computer), simply change the configuration to the following:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="UserId" value="devuser"/>
    <add key="Password" value="devuser"/>
    <add key="Server" value="(local)"/>
    <add key="Database" value="Northwind"/>
    <add key="dataservice" value="webservice"/>
  </appSettings>
</configuration>

Re-run the data access example from earlier in this article without any changes to see the new service in action. Note that developers do not even need to know that the data now travels over the Web.
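The selection logic driven by the "dataservice" configuration key can be sketched as follows. The method and names here are hypothetical illustrations of the failover behavior described above, not the article's actual factory code:

```csharp
// A hypothetical sketch of the service-selection logic: honor an explicit
// "webservice" setting, and otherwise fall back to the Web service only
// when the direct SQL Server data service is not valid.
public class DataServiceSelector
{
    public static string PickService(string configuredService, bool sqlServiceValid)
    {
        // Explicit override from the configuration file.
        if (configuredService == "webservice")
            return "webservice";

        // Default behavior: direct access if possible, Web service failover otherwise.
        return sqlServiceValid ? "sqlserver" : "webservice";
    }
}
```

In the real infrastructure, the returned choice would map to instantiating the direct SQL Server data service or the WebDataService-based one.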

At this point, you probably need to ask yourself whether you have chosen the best way of accessing data over the Internet. The approach clearly works, and I have seen it used in very similar form in production applications. However, I must point out a few shortcomings and characteristics of the current implementation.

For one, the Web service approach carries a performance penalty. Using the failover approach you have implemented, applications based on this infrastructure will perform very well on LANs, because the direct-access data service will be used. Whenever the Web service approach is used, however, performance will not be as stellar. It often surprises me how well Web services perform, but there is definitely a difference. There are a number of ways to improve performance. One of the biggest areas of inefficiency lies in the way .NET serializes ADO.NET DataSets for transport over Web services. You could address this problem by creating a custom serializer, similar to the approach you took with the command object.
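As a simple illustration of the kind of savings a custom serializer can buy, the default DataSet serialization includes an inline schema along with the data; sending the data alone already shrinks the payload. This sketch is one possible starting point, not the article's implementation:

```csharp
// A sketch of a simple custom DataSet serializer: optionally omit the
// inline XSD schema that the default serialization carries, at the cost
// of having to know the schema on the receiving end.
using System.Data;
using System.IO;

public class DataSetSerializer
{
    public static string Serialize(DataSet ds, bool includeSchema)
    {
        using (StringWriter writer = new StringWriter())
        {
            ds.WriteXml(writer, includeSchema
                ? XmlWriteMode.WriteSchema    // data plus inline schema
                : XmlWriteMode.IgnoreSchema); // data only
            return writer.ToString();
        }
    }
}
```

On .NET 2.0 and later, setting DataSet.RemotingFormat to SerializationFormat.Binary is another real option for shrinking transfers, though it requires a binary transport rather than a plain Web service.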

You could also achieve significantly better performance with alternate Internet technologies, such as .NET Remoting or the upcoming Windows Communication Foundation (WCF, formerly known as Indigo). These approaches allow for binary communication, which performs much better than the XML-based communication used by Web services.

Another issue is security. It is conceivable, and indeed somewhat probable, that someone could call your data service's Web service from outside your application and thus gain full access to your database. This is a serious risk that must be addressed in production implementations. Luckily, there are a number of ways to enhance security.

First, you need to make sure that only users who have legitimate business in your database can access the Web service. This can be handled through standard authentication mechanisms. You also need to make sure that data is securely encrypted while it travels over the wire, so it cannot be intercepted and viewed. This can be handled through Web service security. There are several ways of encrypting data in transit, and you can pick whichever you are comfortable with. However, do not implement your own encryption mechanism. No matter how confident you are in your algorithm, and no matter how hard it may seem for you to crack, it is almost certainly significantly weaker than even the simplest standard encryption mechanisms available to .NET Web service developers.
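In ASP.NET, the authentication requirement can be expressed declaratively in the Web service's web.config. The fragment below is a minimal sketch using Windows authentication; the right mechanism depends on your environment:

```xml
<!-- A minimal sketch: require authenticated callers.
     Windows authentication is just one option. -->
<configuration>
  <system.web>
    <authentication mode="Windows" />
    <authorization>
      <!-- "?" denotes anonymous users; deny them outright. -->
      <deny users="?" />
    </authorization>
  </system.web>
</configuration>
```

Encryption of the wire traffic is then typically layered on top, for instance by exposing the service only over HTTPS.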

Another security problem you need to address is tampering. Encryption tends to counter this problem, but if you want to be really certain that nobody messes with your data while it travels over the Internet, you can digitally sign it. At this point, you have a system that only authorized people and applications can use, and whose data in transit can be neither read nor modified by outsiders.
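The tamper-detection idea can be sketched with a keyed hash (HMAC): the sender computes a signature over the message with a shared key, and the receiver recomputes it, so any modification in transit changes the hash. This is an illustrative sketch; a production system would use an established standard such as WS-Security rather than hand-rolling the scheme:

```csharp
// A sketch of tamper detection with a keyed hash (HMAC). The key handling
// and message layout are illustrative assumptions.
using System;
using System.Security.Cryptography;
using System.Text;

public class MessageSigner
{
    public static string Sign(string message, byte[] key)
    {
        using (HMACSHA256 hmac = new HMACSHA256(key))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(message));
            return Convert.ToBase64String(hash);
        }
    }

    public static bool Verify(string message, string signature, byte[] key)
    {
        // Recompute the signature; any change to the message changes the hash.
        return Sign(message, key) == signature;
    }
}
```

The receiver rejects any request whose recomputed signature does not match the one that was sent along.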

You are still not as secure as I would like you to be. Another scenario you may want to protect against is the "replay attack." Someone might intercept your message, and although it could not be read or altered, it could be sent to your server again at a later point in time. This allows a potential hacker to retrieve whatever data happened to be requested by that message, store it locally, and try to crack it in the comfort and privacy of his or her own office, with no time limitations, all of which greatly increases the hacker's chance of success. You can prevent replay attacks by adding a timing mechanism to the service, such as a time stamp or a ticket. Each ticket is only valid for a short period of time (perhaps a matter of minutes), greatly reducing the window for replay attacks. (Of course, you need to make sure that the ticket information is also encrypted and/or signed so it cannot be altered.)
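The server-side ticket check itself is simple. In this sketch, the five-minute window is an arbitrary illustration, and the timestamp is assumed to be covered by the message signature so an attacker cannot refresh it:

```csharp
// A sketch of a replay guard: a ticket (here just an issue timestamp)
// is only honored for a short validity window.
using System;

public class ReplayGuard
{
    // Illustrative window; tune to your environment and clock skew.
    public static readonly TimeSpan ValidityWindow = TimeSpan.FromMinutes(5);

    public static bool IsTicketValid(DateTime ticketIssuedUtc, DateTime nowUtc)
    {
        TimeSpan age = nowUtc - ticketIssuedUtc;
        // Reject tickets from the future as well as expired ones.
        return age >= TimeSpan.Zero && age <= ValidityWindow;
    }
}
```

A replayed message then fails this check as soon as its ticket has aged past the window, even though its signature is still intact.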

At this point, you are getting closer to a reasonably secure system, and I am sure you can come up with further security measures on your own. However, there is one more thing I strongly recommend. Let's say all of this wasn't enough and a hacker has managed to get into your system after all. Remember: hackers have all day to attack systems, while developers have many other things to worry about, which gives attackers a great advantage and the ability to break into systems you would consider quite secure. I still want to put up another wall of defense.

Instead of allowing full access to the database, it would be much better to allow access only to stored procedures. This can be done easily by setting access permissions so that the account the Web service uses to log in has access only to certain stored procedures. This lets you judge very clearly what sort of damage hackers can do to your systems in a worst-case scenario, as they cannot do anything that is not supported by the exposed set of stored procedures. Of course, this may still be bad enough, but it is normally not nearly as bad as what they could do with low-level database access.
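Alongside the database permissions, the Web service itself can enforce the same rule as a second line of defense by rejecting any command that is not a stored procedure call. The validator below is an illustrative sketch, not part of the article's listings:

```csharp
// A sketch of enforcing stored-procedure-only access at the service layer:
// ad-hoc SQL text is rejected before the command ever reaches the database.
using System.Data;

public class CommandValidator
{
    public static bool IsAllowed(CommandType type)
    {
        // Only commands that name a stored procedure pass; CommandType.Text
        // (ad-hoc SQL) and CommandType.TableDirect are refused.
        return type == CommandType.StoredProcedure;
    }
}
```

A server-side method such as GetCommandFromXml() would call this check and refuse the request before executing anything.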

If you implement all of these measures, you have put up a number of walls that will deter hackers for quite a while. Your database would have to contain extremely valuable data for it to still be worthwhile for hackers to break into your system. Of course, there are scenarios where this level of security is not sufficient. In that case, you should probably consult with a company specializing in security, or question whether your application is suitable for distributed and mobile use.
