Making Manageable Microservices on AWS


The allure of microservices is that you can break down monolithic applications into smaller, more manageable chunks. A great idea, to be sure, but how would an enterprise implement this new architecture?

Attendees at HashiConf 2015, held in Portland, Oregon, got a briefing on this approach from Chris Munns, Amazon Web Services’ business development manager for DevOps.

Before an organization transforms a larger application into a microservice-based one, Munns explained, designers should first establish policies, rules, and a pattern for how tasks will be designated.

Considered as a whole, the traditional development pipeline often shows a large gap between the effort spent coding an application and the work required across the rest of the pipeline to support it, and that gap can set a project back.

If a project’s hand-offs involve heavy processes that stretch the time between finished code and a shipped product, microservices may offer a way to automate more of the work consuming that time.

When to Use Microservices, and When Not To

At its core, the microservice architecture is a simple idea: Each microservice executes a single task, and they all communicate with one another over API calls.
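As a minimal illustration of that idea (not an example from the talk), the sketch below defines two hypothetical Python services, inventory and orders, where each does one job and they interact only through an HTTP API; the service names, port, and the Flask/requests dependencies are all assumptions made here for brevity.

```python
# microservices_sketch.py -- a minimal illustration, not from Munns's talk:
# two hypothetical services that each do one job and talk only via HTTP.
# In practice these would be separate deployments; they share a file here
# only for brevity. Assumes `pip install flask requests`.
import requests
from flask import Flask, jsonify

inventory = Flask("inventory")
STOCK = {"widget": 42, "gadget": 7}  # stand-in for the service's own data store

@inventory.route("/stock/<item>")
def stock(item):
    # The inventory service's single task: report stock for an item.
    return jsonify({"item": item, "available": STOCK.get(item, 0)})

def orders_can_fulfill(item: str, quantity: int) -> bool:
    # The orders service never touches inventory's data directly;
    # it only calls inventory's API.
    resp = requests.get(f"http://localhost:5001/stock/{item}", timeout=2)
    resp.raise_for_status()
    return resp.json()["available"] >= quantity

if __name__ == "__main__":
    inventory.run(port=5001)  # run this, then call orders_can_fulfill("widget", 5)
```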

This approach, however, can pose some limitations when moving an existing application or business process to the new model.


Munns noted that applications relying on large databases might be difficult to move to a microservice-based approach, since the database tables are hard to relocate or split into smaller pieces without breaking the components that depend on them.

There are other considerations as well.

On larger projects, team members may have left the company, leaving behind overlapping concepts or pieces of code and making it difficult to determine who owns which part of the project when bugs arise.

Munns stressed throughout the presentation that if technology is blocking a company from automating its services, the cause is most likely feature debt somewhere in the code base, and that debt should be evaluated.

“Microservices are a pattern. Define your standards early on, then adopt them. Pick a standard and standardize on it. Automate everything.” — Chris Munns, Amazon.

When building multiple microservices, coordination across teams can become difficult. When making the switch, software development teams must clearly define who is responsible for which microservice, so that accountability is shared clearly across the whole team.

Munns noted that smaller teams may rely more heavily on reusing code and sharing modules to accomplish more with less. When using microservices, processes such as monitoring, metrics, logging, and security can present much more of a challenge to small teams. Munns encourages these teams to performance-tune their application before discussing refactoring to handle loads at scale, given that many who consider this process do not actually need to do so.

When establishing a pattern for working with microservices, Munns offered the following questions to help teams think through their use cases before they start building (a small illustrative sketch of the second question follows the list):

  1. How will clients communicate with the services?
  2. How will one handle cross-service authorization?
  3. How do services prevent abuse?
  4. How can continuous integration be built into the development process?
  5. How will service discovery be handled?
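One way (among many) to approach the second question is to have a calling service sign its requests with a shared secret and have the receiving service verify the signature. The sketch below is a minimal illustration using only Python's standard library; the secret handling, service names, and payloads are placeholders, not anything Munns prescribed.

```python
# shared_auth.py -- hypothetical HMAC-based cross-service authorization sketch
import hmac
import hashlib

SHARED_SECRET = b"replace-with-a-real-secret"  # assumed; keep real secrets in a secrets manager

def sign_request(service_name: str, body: bytes) -> str:
    # The calling service attaches this signature as a request header.
    msg = service_name.encode() + b"\n" + body
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()

def verify_request(service_name: str, body: bytes, signature: str) -> bool:
    # The receiving service recomputes the signature and compares in constant time.
    expected = sign_request(service_name, body)
    return hmac.compare_digest(expected, signature)

if __name__ == "__main__":
    sig = sign_request("orders", b'{"item": "widget"}')
    print(verify_request("orders", b'{"item": "widget"}', sig))  # True
```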

Automate the Pipeline

Using an API gateway greatly reduces many of the common pain points associated with working with microservices, providing benefits and enacting certain constraints that push toward good practices, Munns said.


AWS offers its own service, Amazon API Gateway, which lets customers host multiple API versions, distribute API keys, throttle and cache requests, generate client software development kits (SDKs), and more.
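As a rough illustration of the key-distribution and throttling features (not code from the talk), the sketch below uses today's boto3 API Gateway client to create a usage plan with request limits and attach an API key to it; the names and limits are made up, and it assumes boto3 is installed and AWS credentials are configured.

```python
# api_gateway_limits.py -- hedged boto3 sketch: API keys plus request throttling.
# Assumes `pip install boto3` and configured AWS credentials; names are illustrative.
import boto3

apigw = boto3.client("apigateway")

# A usage plan caps how fast and how much clients may call the API.
plan = apigw.create_usage_plan(
    name="example-plan",
    throttle={"rateLimit": 100.0, "burstLimit": 200},  # steady-state and burst requests/sec
    quota={"limit": 100000, "period": "DAY"},          # hard daily request cap
)

# Distribute a key to each client and attach it to the plan.
key = apigw.create_api_key(name="example-client-key", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)
print("API key value:", key["value"])
```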

A common practice when building microservices is to write the API definition in Swagger, a popular API description framework, and import that definition into an API gateway.

From there, a developer can generate an SDK in any number of languages, bundle it, and make it available for other members of the development team to work against. Munns noted that the rest of this workflow can easily be automated, stressing that automation is crucial wherever possible.
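A hedged sketch of that Swagger-to-SDK flow using today's boto3 API Gateway client might look like the following; the file name, stage name, and SDK language are assumptions.

```python
# swagger_to_sdk.py -- hedged boto3 sketch of the Swagger -> API gateway -> SDK flow.
# Assumes boto3, AWS credentials, and a Swagger/OpenAPI file named petstore-swagger.json.
import boto3

apigw = boto3.client("apigateway")

# 1. Import the Swagger definition as a new REST API.
with open("petstore-swagger.json", "rb") as f:
    api = apigw.import_rest_api(failOnWarnings=True, body=f.read())

# 2. Deploy it to a stage so it is callable.
apigw.create_deployment(restApiId=api["id"], stageName="dev")

# 3. Generate a client SDK that teammates can code against.
sdk = apigw.get_sdk(restApiId=api["id"], stageName="dev", sdkType="javascript")
with open("client-sdk.zip", "wb") as out:
    out.write(sdk["body"].read())
```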

For applications that require frequent updates or need to track version changes and dependencies, it is important to use tools that make this process more efficient.

Versionize addresses these goals by tracking a code base’s dependencies; it can then notify other developers that a new version of the service is available, or that existing dependencies need updating.

Using an API gateway removes many of the above concerns from being part of the application development process. If one ensures that every developer working on a project follows a set pattern, microservices can be developed with relative ease.

EC2, Containers and Lambda

Many teams managing microservices on AWS are running them on Amazon’s EC2 instances, Munns said. When working with microservices, it is best to deploy only a single microservice per container. Scaling can become an issue if multiple microservices are packed into a single container, as one service may require more system resources than another, which can starve the other services. With that in mind, running multiple microservices in one container is best undertaken only with per-service resource limits or by spreading the services across a cluster of nodes.
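As one hedged way to express those per-service limits on AWS (not something shown in the talk), an ECS task definition can reserve CPU and cap memory for a single-service container; every name and value below is illustrative.

```python
# register_service_task.py -- hedged boto3 sketch: one microservice per container,
# with explicit CPU and memory limits. Assumes boto3 and AWS credentials;
# the family, image, and port are hypothetical.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="inventory-service",
    containerDefinitions=[
        {
            "name": "inventory",
            "image": "example/inventory:1.0",  # hypothetical image
            "cpu": 256,      # CPU units reserved for this container
            "memory": 512,   # hard memory limit in MiB
            "essential": True,
            "portMappings": [{"containerPort": 5001, "protocol": "tcp"}],
        }
    ],
)
```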


When developing a service catalog, one can create a pre-defined environment and scale it quickly using Terraform, Munns noted. EC2 offers dependability at scale, with a variety of tools and workflow enhancements to support it.

Larger EC2 instance types give developers more compute to run services, with less operational overhead at scale. Munns noted that developers can use Consul to point services at one another, allowing for more streamlined service discovery.
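For a concrete, if simplified, picture of that pattern, the sketch below registers a service instance with a local Consul agent and then looks up healthy instances through Consul’s HTTP API; the addresses, ports, and service names are assumptions.

```python
# consul_discovery.py -- hedged sketch of registering and discovering a service
# through a local Consul agent's HTTP API. Assumes `pip install requests` and a
# Consul agent on localhost:8500; the service name, ID, address, and port are illustrative.
import requests

CONSUL = "http://localhost:8500"

# Register this instance of the inventory service with the local agent.
requests.put(
    f"{CONSUL}/v1/agent/service/register",
    json={"Name": "inventory", "ID": "inventory-1", "Address": "10.0.0.5", "Port": 5001},
).raise_for_status()

# Later, another service asks Consul for healthy inventory instances.
entries = requests.get(
    f"{CONSUL}/v1/health/service/inventory", params={"passing": "true"}
).json()
for entry in entries:
    svc = entry["Service"]
    print(f"inventory available at {svc['Address']}:{svc['Port']}")
```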

“If you’re running into a situation where you’re managing the container manager more than your services, that’s an issue.” — Chris Munns, Amazon.

Another option is containers. Container technology is still relatively new; while splitting work across multiple containers can reduce the burden on any one system, teams can still end up running too many servers to coordinate all the microservices efficiently.

The AWS Lambda service offers a solution to this, providing much needed automation for this segment of the development process.

AWS Lambda manages the underlying compute instances for you, Munns asserted. Lambda takes care of logging, patching, and other tasks so that developers are not constantly monitoring instances. It can run functions without pre-provisioning capacity, and it can trigger functions from push notifications sent by S3 (Simple Storage Service) and other AWS services. Lambda functions can also be triggered by outside events, such as mobile or browser data streamed into Amazon Kinesis or DynamoDB.
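To show what such an event trigger looks like in practice, here is a minimal Python handler for S3 put notifications; the processing step is a placeholder, and the bucket-to-function wiring is configured in AWS rather than in code.

```python
# s3_triggered.py -- hedged sketch of a Lambda handler invoked by S3 put notifications.
# The event structure below follows the standard S3 notification format.
import urllib.parse

def handler(event, context):
    # Each notification can batch several records.
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Placeholder for the single task this microservice performs.
        print(f"new object: s3://{bucket}/{key}")
    return {"processed": len(records)}
```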

Here’s an example of Lambda in action: SquirrelBin is a serverless web app orchestrated with AWS Lambda. Munns noted that the frontend is written in JavaScript and served from S3 to the user’s browser. The JavaScript frontend then calls the API gateway, which passes the request to Lambda, which in turn executes a function that talks to DynamoDB.
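A rough Python sketch of the final hop in that chain, a Lambda function writing to DynamoDB behind the API gateway, might look like the following; this is not SquirrelBin’s actual code, and the table name, payload, and response shape are assumptions.

```python
# save_snippet.py -- hedged sketch of a Lambda function behind an API gateway
# that writes an item to DynamoDB. Not SquirrelBin's actual code; names assumed.
import json
import uuid
import boto3

table = boto3.resource("dynamodb").Table("snippets")  # hypothetical table name

def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    item = {"id": str(uuid.uuid4()), "content": body.get("content", "")}
    table.put_item(Item=item)
    # Response shape for an API Gateway proxy integration (an assumption here).
    return {"statusCode": 200, "body": json.dumps(item)}
```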

SquirrelBin

SquirrelBin is just one example of an automated application workflow with Lambda and microservice management. Overall, AWS Lambda presents a powerful way to manage microservices, while offering software developers a better way to design, handle and prioritize their project pipelines.

Feature image: “Spiderweb Reflecting Light” by Thomas Leth-Olsen is licensed under CC BY 2.0.



