Rahul Subramaniam
Rahul Subramaniam is CEO of EngineYard, DevFactory, and FogBugz. He is also Head of Innovation at ESW Capital. He has a long history in the computer software industry, with a background in algorithms, Python, agile software development, and software project management.

Three-tiered architecture is a myth. You only think you have it. In reality, you have a two-tiered architecture and a dream.

The “traditional” definition of three-tiered architecture says it consists of:

  • A presentation layer
  • An application layer
  • A database layer

True three-tiered architecture allows you to scale or migrate any individual layer with no impact whatsoever on the other two layers — you can make a change in any layer without having to change a single line of code in any of the other layers.

It’s a great idea — it sets up a situation where you can plug and play any component cleanly and smoothly.

That separation has been fully achieved in the presentation layer. The HTTP protocol completely decoupled the web tier from the application and database layers: because HTTP specifies the protocol between the web and application servers, either side can be changed, scaled, or replaced without touching the other.

However, if you think realistically, you’ll see that your application and database layers are not even close to being separate entities. They are so tightly coupled you cannot make any changes to one infrastructure without affecting the other.


A Brief History of Application-Database Affinity

Applications and databases have been firmly entwined since the beginning. Unlike “clean” HTTP, SQL never became a uniform protocol: database vendors introduced proprietary extensions and dialect-specific implementations of the language. Instead of separating the tiers, SQL fused them together; any persistence-related change in the application had to be reflected in the database, and vice versa.

Everything that happened on one tier needed to be reflected in another. When both systems were run on the same server, this accelerated processing, as the information didn’t have to travel very far.

Breaking Up Is Hard to Do

The advent of the cloud changed the dynamics completely: the promise of on-demand scalability could finally be realized. Except it really isn’t happening.

It’s still nearly impossible to decouple the application server from the database. A single change to one can have adverse effects on the way the two tiers communicate; a change in the database may “break” connectivity to something in the application. For example, the application may request a “purchase order number” through a SQL query, but if an update to the accounting database renames that column to “work order number,” the data is no longer accessible to the application. Finding all of these couplings requires extensive static analysis of the code-to-database connections before anything can be changed.
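The coupling problem fits in a few lines of code. Below is a minimal, hypothetical sketch (SQLite stands in for the accounting database; the table and function names are made up) of an application hard-coded to a column name, and what happens when that column is renamed on the database side:

```python
import sqlite3

# Hypothetical example: the application tier hard-codes the column name
# "purchase_order_number", so it is coupled to the database schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, purchase_order_number TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'PO-1001')")

def get_po_number(order_id):
    # The SQL lives in the application; a schema change breaks it.
    row = conn.execute(
        "SELECT purchase_order_number FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    return row[0]

assert get_po_number(1) == "PO-1001"

# A database-side rename (supported in SQLite >= 3.25) silently breaks the
# application tier, which still asks for the old column name:
conn.execute(
    "ALTER TABLE orders RENAME COLUMN purchase_order_number TO work_order_number"
)
try:
    get_po_number(1)
except sqlite3.OperationalError as err:
    print("application broken:", err)
```

No amount of database-side care helps here: the breakage only surfaces when the application runs its query, which is why static analysis of the application code is needed up front.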

Falling Short of a Third Tier

Instead of directly addressing the coupling issues, companies try workarounds during the transition to the cloud. They duplicate efforts, running on-premises and cloud databases in parallel as they gradually transition functionality. They migrate old data to the cloud database and write to both databases. If no issues occur and both databases are in sync, they later switch over read queries.
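The parallel-run pattern described above can be sketched in a few lines. This is a hedged illustration, not a real migration tool: in-memory SQLite stands in for the on-premises and cloud databases, and names like `write_order` are assumptions:

```python
import sqlite3

# Dual-write transition sketch: in-memory SQLite stands in for the
# on-premises ("legacy") and cloud databases.
legacy_db = sqlite3.connect(":memory:")
cloud_db = sqlite3.connect(":memory:")
for db in (legacy_db, cloud_db):
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

reads_switched = False  # flip once the two databases are verified in sync

def write_order(order_id, total):
    # During the transition, every write goes to BOTH databases.
    for db in (legacy_db, cloud_db):
        db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))
        db.commit()

def read_order(order_id):
    # Reads stay on the legacy database until the switch-over.
    db = cloud_db if reads_switched else legacy_db
    return db.execute(
        "SELECT total FROM orders WHERE id = ?", (order_id,)
    ).fetchone()

write_order(1, 99.5)
assert read_order(1) == (99.5,)  # served by the legacy database
reads_switched = True            # later: cut reads over to the cloud
assert read_order(1) == (99.5,)  # same answer, now served by the cloud copy
```

Every write path in the application must be found and doubled up, which is exactly the duplicated effort the next list complains about.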

That’s great, except:

  • It’s complicated to partially roll over data functionality if organizations can’t efficiently route read and write traffic to specific databases.
  • DBAs may not be able to quickly roll back when things go wrong. With data and application logic often tightly coupled, switching to another database during a failover without code changes may lead to application errors. This makes migrations risky for organizations that can’t modify their data infrastructure in real time.
  • Successful database migration requires reliable monitoring and centralized logging capabilities. IT teams that lack transparency during a database migration — no insights into performance or a clear audit trail for accountability — cannot transition successfully.
  • The SQL-derived tight coupling between databases and applications requires extensive application code changes each time the underlying data storage changes, making organizations warier of taking on cloud migration.

You CAN Achieve True 3-Tiered Architecture

The secret is database load balancing, which plays a key role in the decoupling process. The load balancer serves as an “automatic” translator: a middle layer that keeps the communication channels open between the database and the application while bypassing the tight coupling, allowing each tier to scale or migrate as necessary. It makes the database-application couplings manageable without damaging either side. With true three-tiered architecture, your database can easily and quickly “speak” to multiple applications, and your applications can speak with multiple databases.
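As a rough illustration of the middle-layer idea (not any particular product), here is a toy router: the application talks to one entry point, and the router decides per statement whether the primary or a replica serves it. Real database load balancers do this at the wire-protocol level; the `DatabaseRouter` class, its round-robin policy, and the shared in-memory SQLite standing in for a replicated primary/replica pair are all assumptions of this sketch:

```python
import sqlite3

# A shared in-memory SQLite database stands in for a replicated
# primary/replica pair (all connections see the same data).
uri = "file:lbdemo?mode=memory&cache=shared"
primary = sqlite3.connect(uri, uri=True)
replicas = [sqlite3.connect(uri, uri=True) for _ in range(2)]

class DatabaseRouter:
    """Single entry point that routes each statement to a backend."""

    def __init__(self, writer, readers):
        self.writer = writer    # primary: accepts writes
        self.readers = readers  # replicas: serve reads
        self._next = 0

    def execute(self, sql, params=()):
        if sql.lstrip().upper().startswith("SELECT"):
            # Spread reads across replicas, round-robin.
            db = self.readers[self._next % len(self.readers)]
            self._next += 1
        else:
            db = self.writer
        cur = db.execute(sql, params)
        db.commit()
        return cur.fetchall()

router = DatabaseRouter(primary, replicas)
router.execute("CREATE TABLE t (x INTEGER)")     # routed to the primary
router.execute("INSERT INTO t VALUES (?)", (42,))
rows = router.execute("SELECT x FROM t")         # routed to a replica
assert rows == [(42,)]
```

The application only ever sees `router.execute`; which backend answers, how many replicas exist, or where they run can all change without touching application code.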

With database load balancing, each of these challenges can be addressed to truly benefit from cloud scale. Database load balancing distributes workloads across multiple servers. Migration can be eased using intermediary control planes, which translate SQL commands into cloud-native operations without changing a single line of application code. Database load balancing decouples applications from their storage, drives operational simplicity, and unlocks massive TCO reductions. The end result is safe, zero-downtime migration: reads can be switched to the cloud server first, and writes cut over at any time.

Once migration has occurred, keeping the database load balancer in place provides long-term scalability and stability, ensuring zero downtime during maintenance and reducing the risk of unplanned outages.

A database load balancing infrastructure:

  • Enables developers to point the application toward a single connection and interact with data without worrying about the database infrastructure.
  • Understands multiple SQL dialects, allowing dynamic traffic routing without compromising transactions, and reduces or eliminates downtime, cutting costs.
  • Analyzes traffic spikes to allocate resources accordingly and minimize the impact on end-users, keeping mission-critical business applications running efficiently by ensuring data is available and accessible.
  • Allows for safe and quick rollback of a migration should any issues occur with a particular database. DBAs can configure failover criteria to automatically reroute traffic from faulty database nodes without resulting in application errors.
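The rollback and failover bullet above can be sketched as a tiny router that reroutes traffic away from a faulty node once a configured error threshold is hit. The `FailoverRouter` class, its threshold criterion, and the fake database classes are hypothetical stand-ins for what a real load balancer does with health checks:

```python
# Hypothetical failover sketch: once the active backend exceeds a configured
# error threshold, traffic is rerouted to the next backend with no change to
# application code.
class FailoverRouter:
    def __init__(self, backends, max_errors=1):
        self.backends = backends  # ordered by preference
        self.active = 0           # index of the node currently serving traffic
        self.errors = 0
        self.max_errors = max_errors

    def execute(self, fn):
        try:
            result = fn(self.backends[self.active])
            self.errors = 0
            return result
        except Exception:
            self.errors += 1
            if self.errors >= self.max_errors and self.active + 1 < len(self.backends):
                self.active += 1  # failover criterion met: reroute traffic
                self.errors = 0
                return fn(self.backends[self.active])
            raise

class FlakyDB:       # stand-in for a faulty database node
    def query(self):
        raise ConnectionError("node down")

class HealthyDB:     # stand-in for a healthy standby node
    def query(self):
        return "ok"

router = FailoverRouter([FlakyDB(), HealthyDB()])
print(router.execute(lambda db: db.query()))  # prints "ok" after failing over
```

The same mechanism run in reverse is what makes rollback cheap: pointing `active` back at the original node reroutes traffic without any application release.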

In the long-run, database load balancing makes it easier to manage operations, enabling:

  • Consolidation of thousands of schemas on a single cloud database cluster, which simplifies operations and reduces costs, delivering 10x the performance at a tenth of the price.
  • Scaling to hundreds of thousands of transactions per second, smoothly migrating to larger instance sizes or seamlessly using read replicas at scale.
  • Real-time visibility, with rapid identification of — and instant response to — problem queries.
  • Faster app rollout along with ample incremental revenue and cost savings from avoiding downtime, improved website performance and reduced development time.
  • Effective balancing of read and write traffic to dramatically improve overall database throughput.
  • Consolidated database analytics in a single platform for smarter, more efficient, time- and money-saving decision-making.

Transform your two tiers and really adopt three-tier architecture. Load balancing will help get you there.

InApps Technology is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Real.

Feature image via Pixabay.