Putting the VS 2005-Generated Data Access Layer Into Perspective

Do more with less code is the slogan of Visual Studio 2005. When it comes to reducing the amount of written code, wizards are definitely a viable option. Visual Studio 2005 has a lot of wizardry in it, especially to generate data access code. Any code that gets silently injected in your project follows a strict logic and a well-known design pattern. A full understanding of how Visual Studio 2005 does it puts you on the right track to modify and extend the code to build your made-to-measure data access layer. This article dissects the code behind table adapters and binding source components to unveil patterns and best practices.

One of the most popular selling points that Microsoft has pushed since the launch of the .NET Framework 2.0, Visual Studio 2005, and ASP.NET 2.0, is that you need to write much less code, and often no code at all. No code for what? For software applications? Is this the age of intelligent programming and talking machines, where you talk to computers and they magically understand and behave accordingly? No. No such rocket science lives here as yet, thankfully.

Author’s Note: For more information on data access design patterns, you might want to take a look at Martin Fowler’s book “Patterns of Enterprise Application Architecture,” published by Addison-Wesley Professional.

More simply, the “no-code-at-all” slogan refers to the cumbersome presence of zillions of wizards, add-ins and add-ons, and designers that you interactively program through drag-and-drop, check boxes, and buttons. You “declare” what you want and Microsoft’s tools produce code that does what they think you asked for. You may remember the slogan that made history for programming, what you see is what you get (WYSIWYG), used to introduce rapid-application development tools like Visual Basic. I think you could call the new slogan “what-you-get-is-what-the-tool-thinks-you-want” (the impossible-to-pronounce WYGIWTTTYW).

With the 2005 series of products and technologies, Microsoft offers mere mortals tools to do good multi-tier programming. And what’s the essence and quintessence of multi-tier programming if not a serious, well-architected middle-tier?

I seem to hear a booming voice rising from the back of the audience. Hey, wait a moment! Are you perhaps thinking that Microsoft is committed to (finally) teaching developers the principles and patterns of object-oriented design (OOD) and programming (OOP)? My answer is, sort of, but not explicitly.

As a matter of fact, in Visual Studio 2005 you find a bunch of Windows Forms and ASP.NET data controls designed to connect to middle-tier objects and create data-driven applications quickly and quite effectively. Admittedly, this is great news no matter the perspective from which you look at it.

By using the newest Visual Studio 2005 data designer, you create code through user-driven wizards. However, the code you get after clicking the Finish button contains much more abstraction than in previous versions of Visual Studio. You work with top-level objects such as table adapters and typed datasets, but in the end you get a thin, but much better than nothing, middle-tier. More importantly, though, you have the option of connecting this top-level API to your own data access layer, thus completing a canonical multi-tiered system.

In any case, you should quit writing the tons of free-form ADO.NET code that many developers relentlessly inserted in code-behind ASP.NET pages and Windows Forms event handlers. You’ve always been told to use layer separation, data transfer patterns, and effective data representation; but Visual Studio offered datasets and auto-generated ADO.NET code. Today, finally, Visual Studio 2005 offers datasets and custom objects and auto-generates a largely customizable abstraction layer. What’s better, though, is that there’s method in this wizard, and you can recognize popular data access design patterns under the covers.

In this article I’ll examine and discuss the code generated by the Visual Studio 2005 data designer from an object-oriented design perspective.

Composing a Data-driven Windows Form
Let’s proceed step by step with the creation of a data-driven Windows Forms application. The idea is to build a form-based application that displays and edits data from multiple tables in the same database (guess what: the Northwind database). Figure 1 shows the final application.

Figure 1: The final destination of the code in this article.

The form counts a few data-bound controls plus a handful of data connector controls. You typically start by adding the BindingNavigator control to let users navigate through the records of a bound data source. The BindingNavigator control has a VCR-like user interface. It simply selects a particular record in the data source and makes it available through a general-purpose programming interface. The BindingNavigator’s user interface is reminiscent of the command bar of Microsoft Access.

In Windows Forms 2.0, most controls don’t bind directly to a collection object; rather, they use an intermediate object, the binding source, which in turn is bound to a classic enumerable data source object. The BindingNavigator control is no exception.

A binding source component is primarily designed to simplify binding between controls on a form and bound data. It also provides a number of additional services such as currency management and change notification. As mentioned, a binding source component adds a layer of indirection between user interface elements and back-end data, as illustrated in Figure 2.

Figure 2: Binding source components sit in between the user interface and data source.

You attach the binding source component to a physical data source and then bind the controls on the form to the binding source. From now on, any data-related interaction with the data source is accomplished through the binding source. Typical operations include navigation, retrieval, sorting, filtering, and update.
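In code, the wiring described above takes only a few lines. The following sketch assumes hypothetical names (nwDataSet, txtCompanyName, customersBindingNavigator) that mirror what the designer would generate:

   ' A sketch of the wiring described above; the dataset and
   ' control names are hypothetical
   Private Sub BindCustomers()
      ' Attach the binding source to the physical data source
      customersBindingSource.DataSource = nwDataSet
      customersBindingSource.DataMember = "Customers"

      ' Bind controls to the binding source, not to the data source
      txtCompanyName.DataBindings.Add( _
         "Text", customersBindingSource, "CompanyName")

      ' The navigator drives the binding source's current position
      customersBindingNavigator.BindingSource = _
         customersBindingSource
   End Sub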

A binding source component in .NET Framework 2.0 is an instance of a class that extends the BindingSource class. Note that although the terminology is relatively new and specific to .NET Framework 2.0, the core concepts behind a binding source component should be nothing new to expert .NET Windows developers. You may have heard in past years of things called currency managers and binding contexts.

Well, the binding source component is simply a UI-less component purposely created to let developers manage binding objects from within Visual Studio 2005 and mostly in a declarative manner.

The BindingSource Class
The BindingSource class wraps a data source and exposes it through its own object model. Table 1 lists the main properties of the BindingSource base class.

Table 1: The programming interface of the BindingSource class.

Property Description
AllowEdit Indicates whether items in the underlying data source can be edited.
AllowNew Indicates whether new items can be added to the underlying data source.
AllowRemove Indicates whether items can be removed from the underlying data source.
Count Gets the number of items in the underlying data source.
CurrencyManager Gets a reference to the associated currency manager.
Current Gets the current item in the underlying data source.
DataMember Indicates a specific list in the data source.
DataSource Indicates the data source that the connector binds to.
Filter The expression used to filter the data source.
IsReadOnly Indicates whether the underlying data source is read-only.
IsSorted Indicates whether the items in the underlying data source are sorted.
Item Retrieves the data source item at the specified index.
List Gets the list that the connector is bound to.
Position Indicates the index of the current item in the underlying data source.
Sort Indicates the column names used for sorting, and the sort order.
SortDirection Indicates the direction the items in the data source are sorted.
SortProperty Gets the PropertyDescriptor object that is being used for sorting the data source.
SupportsAdvancedSorting Indicates whether the data source supports multi-column sorting.
SupportsChangeNotification Indicates whether the data source supports change notification.
SupportsFiltering Indicates whether the data source supports filtering.
SupportsSearching Indicates whether the data source supports searching.
SupportsSorting Indicates whether the data source supports sorting.

It is essential to note that a BindingSource object is designed to manage both simple and complex data binding scenarios, meaning that it incorporates functionality from both CurrencyManager and PropertyManager in the .NET Framework 1.x. In light of this, note that what in Table 1 is referred to as the underlying data source is often a collection (e.g., a typed dataset) but can also be an individual object (for example, a stand-alone DataRow).

As you can see from the properties in Table 1, the binding source component features a Position member that indicates the index of the currently selected data item. The BindingSource class doesn’t provide any user interface, so the “selection” here is purely logical. It is up to bound controls to transform the logical selection into something visible and meaningful to users. The Current property is a shortcut to retrieve the data item found at the currently selected position. The BindingSource class also exposes methods to move the selection backward and forward, to jump to a particular position, and an event to signal that the currently selected element has changed.
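To give an idea of how these members surface in a form's code-behind, here's a hedged sketch; the binding source name and column are hypothetical:

   ' Hypothetical snippet in the form's code-behind
   Private Sub HookNavigation()
      AddHandler customersBindingSource.CurrentChanged, _
         AddressOf OnCurrentChanged
      ' Move the logical selection; bound controls follow
      customersBindingSource.MoveFirst()
   End Sub

   Private Sub OnCurrentChanged(ByVal sender As Object, _
         ByVal e As EventArgs)
      ' Current returns the item at the selected Position;
      ' it is a DataRowView when bound to a DataTable
      Dim view As DataRowView = _
         CType(customersBindingSource.Current, DataRowView)
      Me.Text = CStr(view("CompanyName"))
   End Sub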

To bring up these functionalities and make them quickly and easily emerge to the user interface level, you can use the BindingNavigator control and associate it with a binding source. Each time a user clicks on the VCR-like buttons in Figure 1, the Position and Current properties on the binding source are updated and the CurrentChanged event fires. As in Windows Forms 1.x data binding, controls listening to these events receive notification and can properly update their own user interface. This mechanism makes possible the trick of automatic selection change and view synchronization between otherwise unknown and totally unrelated data grids. Let’s proceed with the definition of a data source in Visual Studio 2005.

Importing Data Sources in Windows Forms Applications
To add data to your Windows application, you start by dropping a BindingSource component onto your form. Next, you set the DataSource property of the component. There are a number of feasible data sources, including arrays, collections, and lists of custom types. Lists are typically extended collections that implement any of the following interfaces: IBindingList, ITypedList, or IListSource. Note that popular ADO.NET container classes such as DataSet and DataTable fall in the last category, as they implement the IListSource interface.

In the beginning, you have no data source in the application and have to create one. When you select the DataSource property in the BindingSource component, you face a window like that in Figure 3. Click on the link at the bottom of the popup window and start the wizard to add a new data source to the current project. Here’s where you start having some fun.

Figure 3: Adding a new data source to the project.

Politely enough, Visual Studio 2005 asks where the application intends to get data from. There are three possible scenarios: database, external Web service, or custom object. If you opt for a Web service, Visual Studio 2005 opens the Add Web Reference dialog box, lets you choose either a local or remote Web service, and creates the corresponding proxy class. It is then up to you to write the code that retrieves data and binds it to controls. Similarly, you can choose a custom object inside any of the assemblies referenced by the project.

Authors of the custom set of objects or Web service are responsible for designing and implementing the object model using any approach and pattern they prefer and reckon effective. When you opt for the database option, Visual Studio 2005 generates a lot of code for you. Typically, you add a DataSet component; that is, a set of related tables described through an XSD file. The DataSet component represents an in-memory DataSet object with one or more data tables. So far, the behavior of Visual Studio 2005 is not that much different from Visual Studio 2003 where typed dataset classes were created from declarative XSD files.

Visual Studio 2005 does more though. In particular, it introduces the concept of a table adapter.

Table Adapters
A table adapter provides support for communication between an application and its database. For example, a table adapter connects to a database and executes commands; any returned data is stored in a DataTable object for further processing. You can also use a table adapter to send updates back to the database. A table adapter is an instance of a tool-generated class that specializes the data adapter class defined in .NET managed providers. Simply put, it is a wrapper object acting as the adapter for a particular table. Interestingly enough, there’s no common base class for table adapters.

This code snippet shows the typical signature of a table adapter class.

   Partial Public Class CustomersTableAdapter
      Inherits System.ComponentModel.Component
      ...
   End Class

Internally, a table adapter class incorporates a SqlCommand, SqlConnection, and SqlDataAdapter object. In Visual Basic .NET, the data adapter is decorated with the WithEvents modifier to catch events. Table 2 lists the internal properties that make up a table adapter.

Table 2. Internal members of a table adapter class.

Member Modifier Description
Adapter Private ReadOnly References the data adapter used for communicating with the corresponding database table.
Connection Friend References the connection object used for communicating with the corresponding database table.
CommandCollection Protected ReadOnly Defines a collection of command objects that all together describe the behavior of the table adapter. To extend the tasks that can be accomplished through the adapter, you add a new command to this collection.
ClearBeforeFill Public Indicates whether the table should be emptied before filling. True by default.

The internal data adapter is initialized in the InitAdapter method. The method is called from within the get accessor of the Adapter property.

   Private ReadOnly Property Adapter() _
         As System.Data.SqlClient.SqlDataAdapter
      Get
         If (Me._adapter Is Nothing) Then
            Me.InitAdapter()
         End If
         Return Me._adapter
      End Get
   End Property

In this context, the adapter is a helper object used to drive standard CRUD (Create, Read, Update, Delete) operations on the underlying table. The adapter defines default T-SQL statements for the Insert, Delete, and Update commands. It doesn’t include a Select command; to populate the data table, the table adapter implements its Select functionality through a pair of public methods named Fill and GetData.

Any data operation requires a physical connection to the data source. Listing 1 shows the internal implementation of the Connection property. The connection string is stored in the application’s configuration file and can be edited at any time without touching and recompiling the base code. When you set a new connection, the information is automatically passed along to all command objects in the table adapter.

As you’ll see later on, the table adapter class is a well-known element in a popular design pattern for data access. Through the CommandCollection property, it lists all the actions one can execute on the table. In a word, it defines the “behavior” of the table.

   Protected ReadOnly Property CommandCollection() _
         As System.Data.SqlClient.SqlCommand()
      Get
         If (Me._commandCollection Is Nothing) Then
            Me.InitCommandCollection()
         End If
         Return Me._commandCollection
      End Get
   End Property

By default, the CommandCollection contains just one command, the T-SQL command for the Select operation. The following code shows the initialization of the collection.

   Private Sub InitCommandCollection()
      _commandCollection = New SqlCommand(0) {}
      _commandCollection(0) = New SqlCommand()
      _commandCollection(0).Connection = Me.Connection
      _commandCollection(0).CommandText = _
         "SELECT * FROM Customers"
      _commandCollection(0).CommandType = CommandType.Text
   End Sub

The default command object in the collection is wrapped by the Fill and GetData methods (see Listing 2), where Fill populates the data table with the results of the command and GetData returns a new DataTable object with the same data. All public methods of the table adapter class are listed in Table 3.

Table 3: Methods of a table adapter class.

Method Description
Fill Populates the data table associated with this table adapter with the results returned by the default command in the command collection.
GetData Returns a newly created DataTable object filled with the data obtained from the execution of the default command in the command collection.
Delete Executes the DELETE command associated with the internal data adapter.
Insert Executes the INSERT command associated with the internal data adapter.
Update Executes the UPDATE command associated with the internal data adapter. The method features quite a few additional overloads.
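From the caller’s perspective, Fill and GetData are interchangeable ways to run the default SELECT; here’s a quick usage sketch (the typed names mirror those generated for the Northwind Customers table):

   Dim adapter As New CustomersTableAdapter()

   ' Fill populates an existing typed data table...
   Dim customers As New northwndDataSet.CustomersDataTable()
   adapter.Fill(customers)

   ' ...whereas GetData allocates and returns a new one
   Dim customers2 As northwndDataSet.CustomersDataTable = _
      adapter.GetData()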

Listing 3 shows the signatures of the Delete, Insert and Update methods. As you can see, they are auto-generated based on a selected table and a selected number of columns. The source code for each table adapter, and parent DataSet component class, is located in the designer file of the DataSet component class. For example, if you have a northwind.xsd dataset file, the table adapter code is located in a file named northwind.designer.vb. You can edit signatures and commands for each statement and, for example, replace standard T-SQL with stored procedures running inside transactions.
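Because table adapters are partial classes, you can also apply such changes from a separate file, leaving the designer file untouched. The sketch below is one way to do it; it assumes a stored procedure named CustomersUpdate exists in the database:

   ' Assumption: a CustomersUpdate stored procedure exists
   Partial Public Class CustomersTableAdapter
      Public Sub UseUpdateStoredProcedure()
         With Me.Adapter.UpdateCommand
            .CommandType = CommandType.StoredProcedure
            .CommandText = "CustomersUpdate"
            ' Command parameters must be remapped accordingly
         End With
      End Sub
   End Class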

Finally, it is worth noting that the Update method has a few overloads, as shown below:

   Public Overloads Overridable Function Update( _
      ByVal dataTable As CustomersDataTable) As Integer

   Public Overloads Overridable Function Update( _
      ByVal dataSet As northwndDataSet) As Integer

   Public Overloads Overridable Function Update( _
      ByVal dataRow As DataRow) As Integer

   Public Overloads Overridable Function Update( _
      ByVal dataRows() As DataRow) As Integer

All these overloads map to one of the standard overloads of the Update method (batch update trigger) on the SqlDataAdapter class. All the methods are implemented as thin wrappers around the corresponding SqlDataAdapter method. Here’s an example:

   Public Overloads Overridable Function Update( _
         ByVal dataRows() As DataRow) As Integer
      Return Me.Adapter.Update(dataRows)
   End Function

Extending a Table Adapter
When you design a business and data access layer, you basically map entities and relationships to database tables and columns. The expected behavior of each entity is typically represented as a sequence of statements to execute against the database. How would you represent the behavior of an entity? Data access design patterns provide some guidance on this; rest assured that whatever pattern is suggested must ultimately be implemented with concrete code. The CommandCollection property of a table adapter is the internal data structure that lists the statements used to hard-code an expected behavior on a table.

Figure 4: Tasks to extend and customize a table adapter.

So if you want to add a behavior to the table that represents the Customers table, you need to add a new T-SQL statement to the CommandCollection property and then make it public via a new method on the CustomersTableAdapter class. Visual Studio 2005 provides a simple wizard to do all of this. You trigger the wizard by choosing one of the tasks defined on the table adapter designer, as in Figure 4.

When you select a table adapter component in Visual Studio 2005, it displays a smart tag. After clicking it, Visual Studio displays a menu as in the figure. If you select to edit queries, you get a graphical representation of all the tables in the current data set component (see Figure 5.)

Figure 5: Table relationships and table adapters in the Visual Studio 2005 Data Designer.

To edit an existing query, you select the task in the table adapter task list and edit the properties in the Properties window. To add a new task, you select Add Query and follow the instructions of the subsequent wizard. The wizard ultimately guides you through the steps required to define a new command, be it a query or an update statement. When you’re done, you see an additional entry in the task list and some changes to the underlying code.

Suppose that you define a new query statement to load customers by country. The final statement is similar to the following:

   SELECT ... FROM customers WHERE country=@country

The wizard asks you to name the methods that will use the statement to retrieve data (GetDataXXX) and fill the passed data table (FillXXX). You can also choose to generate only one of the two methods.

Let’s name the methods GetDataByCountry and FillByCountry (see Figure 6).

The InitCommandCollection method is modified to make room for another command string. The implementation of the two new methods sets up the SelectCommand statement of the adapter with the command extracted from the command collection and executes it. Listing 4 shows the source code of the FillByCountry method.

Figure 6: New tasks added to the table adapter.
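Calling the new members is no different from calling the default pair; a quick usage sketch (the dataset name is hypothetical):

   ' Usage sketch for the newly generated methods
   Dim adapter As New CustomersTableAdapter()
   adapter.FillByCountry(nwDataSet.Customers, "Italy")
   Dim subset As northwndDataSet.CustomersDataTable = _
      adapter.GetDataByCountry("Italy")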

Remarks on the Generated Code
It is essential to note that the code generated around the DataSet and BindingSource components uses the disconnected model and plain T-SQL statements. It goes without saying that using stored procedures and transactions is definitely possible, whether you edit the code through Visual Studio 2005 wizards or manually.

The disconnected model entails that all the data is assumed to be in memory and loaded when the form starts up. Even in this case, though, you can freely edit the auto-generated code, for example, to make the application load only the data it needs at a given time (lazy loading). By default, the form’s Load event of a data-bound form using the code generated by the Visual Studio 2005 data designer looks like this:

   Sub NorthwindForm_Load(ByVal sender As Object, _
         ByVal e As EventArgs) Handles MyBase.Load
      CustomersTableAdapter.Fill(nwDataSet.Customers)
      ...
   End Sub

Visual Studio 2005 places a call to Fill for each table adapter found in the bound DataSet component.
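As a sketch of the lazy-loading alternative mentioned above, you might defer filling the Orders table until the user actually navigates to it; the control and dataset names here are hypothetical:

   Private _ordersLoaded As Boolean = False

   Private Sub OrdersTab_Enter(ByVal sender As Object, _
         ByVal e As EventArgs) Handles OrdersTab.Enter
      ' Fill the Orders table only on first access
      If Not _ordersLoaded Then
         OrdersTableAdapter.Fill(nwDataSet.Orders)
         _ordersLoaded = True
      End If
   End Sub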

Be aware that Visual Studio 2005 generates code that will be compiled with the rest of the project. In no way should you accept this code as-is if it doesn’t completely fit your needs and requirements. You can modify it at will; before you do so, though, it is highly recommended that you deeply understand its logic and structure. Do not think that what a wizard does for you is a take-it-or-leave-it choice. You can take it, and then adapt it to your needs. Most of the time, this is exactly what you should do and the most sensible option.

Common Patterns and Approaches for a DAL
Any reasonably complex system requires a couple of distinct layers to access and manipulate data. You use the Business Logic Layer (BLL) to communicate with the user interface and provide security checks, data validation, and additional services such as pre- and post-processing of data. You use the Data Access Layer (DAL) to access and retrieve data. The DAL is the only component that incorporates the API to access the database. CRUD tasks are the goal of a DAL, which exposes data to the business layer for data exchange. Layers pile up: the DAL is consumed by the BLL and should not be consumed by the user interface, to avoid breaking the separation between layers.

When it comes to creating a library to incorporate data access functionality, how should you proceed? Should you use a customized table-based approach like the one propounded by the Visual Studio 2005 data designer? Or are you better off generating most of the code using a commercial O/R mapper tool? Should you craft your DAL yourself? Nicely enough, these are not mutually exclusive options and can be mixed in any dose and combination. So the question arises: what should you do? Let’s review the common patterns. In doing so, you should also be able to spot the method and logic behind the Visual Studio 2005 approach.

In general, there are two main models for designing the back end of a system that deals with data: the Domain model and the Table model.

The Domain model prefigures an object model where each entity is described through ad hoc classes that incorporate both behavior and data. Any instance of a class corresponds to a particular entity in the model; relationships are expressed through properties that cross over involved classes.

In the Table model, you define one class for each table existing in the target database. Any code required to process data is defined on a single class instance. In summary, if your data model includes an Order entity, you end up with one OrderEntity object for each order with the Domain model and one OrderManager object to handle all orders with the Table model. It’s easy to see that the “order-manager” object here is a close relative of the table adapter object in the Visual Studio 2005 designer.
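In skeleton form, and with purely illustrative member names, the contrast between the two models looks like this:

   ' Domain model: one instance per order; data plus behavior
   Public Class OrderEntity
      Private _id As Integer
      Public Property Id() As Integer
         Get
            Return _id
         End Get
         Set(ByVal value As Integer)
            _id = value
         End Set
      End Property
      Public Sub Ship()
         ' Behavior lives on the entity itself
      End Sub
   End Class

   ' Table model: one instance handles all orders
   Public Class OrderManager
      Public Function GetOrder(ByVal id As Integer) As DataRow
         ' Retrieve a single order's data
      End Function
      Public Sub Ship(ByVal id As Integer)
         ' Behavior takes the order's key as an argument
      End Sub
   End Class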

Inside the Domain and Table abstract models you find various concrete design patterns, the most popular of which are Data Mapper and Table Data Gateway (TDG), respectively.

Applying the Data Mapper Pattern
A data mapper (DM) object does the magic of loading raw data extracted from tables into in-memory objects that closely map to a higher level, entity-relationship data model. The DM pattern is excellent for sophisticated domain logic where the physical structure of data must be abstracted to streamline coding, simplify reading, and guarantee cost-effective maintenance. Highly sophisticated logic typically comes with a number of additional requirements such as multi-database support and serious unit testing. In both cases, different deployments of the system, or different steps during the development cycle, may require that the whole data-mapper layer be replaced to apply a different logic, target a different database, or just for unit-testing purposes. The more abstraction you have and the more stratified the system is, the better for you and your customers.

Admittedly, the DM pattern may look a bit too complex at times and is certainly a viable option for rich and complex systems. A simpler variation, ideal for quite simple logic, is the Active Record pattern. (See the “Active Record” sidebar.)

In a DM scenario, you create a bunch of classes for each entity in the abstract data model for the application. You create an entity class, perhaps a type-safe collection class, and a manager class to provide public entry points in the DAL. Listing 5 details some sample code.

Figure 7: Architecture of a multi-tier application with patterns.

What about the implementation of the methods in the EmployeeManager class? No matter how much abstraction you put in your design, at some point you need to get your hands dirty with connection strings, ADO.NET commands, transactions, and the like. You can certainly insert this code directly in the body of the EmployeeManager’s methods. A better approach, though, is using an intermediate set of classes based on the TDG pattern to do just this. Figure 7 shows the overall schema.

The idea is this: the presentation layer creates an instance of the manager class for a given entity (say, employee). On one end, the manager class exposes high-level objects to the presentation layer; on the other end, it connects to the underlying DAL for physical data access. There might be various layers between the manager and the DAL. For the greatest flexibility, you might want to use a replaceable factory and target different databases.

Invoked from the presentation layer, the manager class may, in turn, instantiate a data access provider factory class. This class reads from the configuration file the information (class and assembly name) about the factory class to use to create data providers for each of the supported entities (employee, customer, product, order).

Each factory class implements a contracted interface that essentially consists of a method to return the data provider for the various entities. Finally, the “data provider” implements an interface that matches the methods on the manager class and uses the TDG pattern to get and set physical data on a particular database.
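Put together, the chain might look like the following sketch; all of the names (IDalFactory, IEmployeeProvider, the dalFactory setting) are hypothetical:

   ' Contracted interface for the factory (hypothetical)
   Public Interface IDalFactory
      Function GetEmployeeProvider() As IEmployeeProvider
   End Interface

   ' Inside the manager: resolve the factory from configuration
   Dim typeName As String = _
      ConfigurationManager.AppSettings("dalFactory")
   Dim factory As IDalFactory = CType( _
      Activator.CreateInstance(Type.GetType(typeName)), _
      IDalFactory)

   ' Ask the factory for the entity's data provider and use it
   Dim provider As IEmployeeProvider = _
      factory.GetEmployeeProvider()
   Dim employees As DataTable = provider.FindAll()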

With this schema, by simply changing an entry in the configuration file you make the whole system work against a totally different database. You have classes (the data providers) where you can write optimized database code, and thanks to contracted interfaces you don’t break upper tiers.

Applying the Table Data Gateway Pattern
In complex systems, you combine the TDG pattern with DM and Factory to create an extremely flexible and extensible solution. Here I must remark upon two architectural aspects. First, you can write straight ADO.NET code directly in the methods of the manager class, get rid of factories, and obtain a pure DM solution. Likewise, you can write a pure TDG solution as discussed in a moment and obtain a Visual Studio 2005-like solution. A combination of patterns versus single patterns is an individual and project-specific choice and trade-off.

In TDG jargon, the gateway is an object that wraps access to a database table. You work with one object that handles all the table’s rows. Methods on the gateway class typically return ADO.NET objects; input arguments are listed as individual values and are not grouped in containers like entity classes. Here’s an example:

   Public Class EmployeeGateway
      Public Function Find(ByVal id As Integer) _
            As IDataReader
         :
      End Function
      Public Function FindAll() As DataTable
         :
      End Function
      Public Sub Save(ByVal id As Integer, ...)
         :
      End Sub
      Public Sub Insert(...)
         :
      End Sub
      Public Sub Delete(ByVal id As Integer)
         :
      End Sub
      :
   End Class

The more methods you add here, the more the TDG model evolves into a DM model. If you compare this abstract definition of a gateway with the table adapter class discussed earlier in the article, you find a clear match between the two.

Drawing the Bottom Line
The first consideration that springs to mind is that patterns are sometimes way too abstract, while your ultimate goal as a developer is writing concrete code that actually works. Patterns, therefore, should be your guide; they are not necessarily the Word that inspires you in each and every action and decision.

It is essential that you make choices based on your preferences and level of comfort with a given approach. At the same time, though, don’t be afraid to try new things and a more structured approach to complex issues. You may find that you like it. Keep things simple, but never simpler than needed, and be ready to recognize and handle complexity. Be agile and nimble, especially in a complex context; this may be hard to do, but it is often just the difference between success and failure.

The wizards of the Visual Studio 2005 data designer generate code that works according to a classic TDG approach. There’s nothing wrong with it and nothing new or esoteric. The key with wizards is knowing what they do in order to figure out what you can do with the results. To fully understand what the Visual Studio 2005 data designer does, a tour of enterprise data access design patterns is essential.

This article first explained the internals of the Visual Studio 2005-generated code and then put them into a pattern perspective, tracing a hopefully clear route for further enhancements. The Visual Studio 2005 data designer does create an architecturally valid DAL, but you have to understand its code and be ready to edit it. The pattern in use, though, may not be appropriate for sophisticated applications. Now you know; go back to Visual Studio 2005 and enjoy it with reason!
