Infrastructure

The key pillars of legacy migration and digital transformation

30 June 2023 • 5 min read


Migrating legacy applications can feel like a daunting challenge to most organisations, and for many, it may feel unnecessary or indulgent. If things are working 'okay' as they are, why change?

The reality is that fear or plain indifference can be a huge barrier to business growth and agility. Organisations that have successfully moved from a monolithic architecture to a modern, cloud-based, composable microservices architecture feel the benefits in a huge number of ways.

Part of the problem with any conversation about migration is that it can feel vague and difficult to articulate and plan. It can also seem overwhelming and impossible to break down into logical components and services. So, whilst acknowledging that every organisation's challenges, ways of working and legacy processes will be different, we can look at what best practice and good modernisation look like in a migration journey.

 

Cloud

Cloud concepts are not particularly new - but they have evolved (and continue to evolve) at an ever-increasing pace. Cloud's importance in modern architecture can’t be overstated: it acts as the foundation for a new way of designing, building and maintaining applications and services. While there’s some debate about just how cost-effective it is, when managed efficiently it will not only save you money but will ultimately help you power growth in ways you didn’t think possible before.

 

There are some fantastic public cloud offerings to explore, but it's critical that you approach the choice of vendor with a clear strategy. Are you selecting a single platform, adopting a hybrid approach or opting for a multi-platform architecture? There's no right or wrong answer, but the architecture, tools and methods need to be consistent with the approach, whether that’s hybrid (cloud and on-premise / managed data centre), multi-cloud or a single, consolidated cloud platform.

 


 


DevOps pipelines for rapid, reliable software delivery

Although DevOps doesn’t need cloud, the two are inextricably linked, because developing in the cloud makes it far easier to move from development to deployment. Indeed, most of the leading cloud vendors enable DevOps ways of working with a range of features, such as continuous integration, continuous delivery and infrastructure automation. Rigorous and robust DevOps (or DevSecOps) engineering principles and execution are the key foundations for the speed and quality associated with modern application development.
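The continuous delivery idea above can be sketched as an ordered set of pipeline stages, where each stage must pass before the next runs. This is a minimal, vendor-neutral illustration; the stage names and checks are assumptions, not any particular CI product's configuration.

```python
# A minimal sketch of a CI/CD pipeline as ordered stages: each stage
# must succeed before the next one runs, and a failure stops the run.
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    """Run stages in order; stop at the first failure."""
    results = []
    for name, step in stages:
        if not step():
            results.append(f"{name}: FAILED")
            break
        results.append(f"{name}: ok")
    return results

# Illustrative stages - in a real pipeline these lambdas would invoke
# build tools, test runners and deployment scripts.
stages = [
    ("build", lambda: True),
    ("unit-tests", lambda: True),
    ("deploy-staging", lambda: True),
]
```

Running `run_pipeline(stages)` walks the stages in order, which is the essential property a CI/CD tool automates: a broken build never reaches deployment.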

 

Finding the right balance between releasing new features and making sure they are reliable for users is critical. As systems become more operational, an SRE (Site Reliability Engineering) approach can be adopted to automate operational tasks and solve problems.
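To make "automating operational tasks" concrete, here is a toy sketch of the SRE pattern of automated remediation: a health check restarts an unhealthy service rather than immediately paging a human, escalating only if restarts don't help. The function names and restart limit are illustrative assumptions.

```python
# A toy SRE-style remediation loop: try automated recovery first,
# escalate to a human only when it fails.
def check_and_remediate(is_healthy, restart, max_restarts=3):
    """Restart the service until it reports healthy, up to a limit."""
    for attempt in range(max_restarts):
        if is_healthy():
            return f"healthy after {attempt} restart(s)"
        restart()  # automated recovery action instead of a page
    return "escalate to on-call"

# Simulate a service that becomes healthy after two restarts.
state = {"restarts": 0}
def fake_restart():
    state["restarts"] += 1
def fake_health_check():
    return state["restarts"] >= 2
```

The point is not the toy logic but the shift it represents: an operational task (restarting a flaky service) becomes code that can be tested, reviewed and improved like any other software.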

 

Containers and microservices

Containers have been one of the most important trends in software development of the last decade. They naturally align with DevOps processes insofar as they allow software to be developed and deployed in a way that is modular and more loosely coupled. In turn, this has helped make microservices a dominant architectural model, in which an application is composed of many specific, self-contained and separate ‘services’, rather than being one huge intersecting monolith.
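The self-contained, loosely coupled quality described above can be shown in miniature: each service owns its own state and exposes only a narrow interface, so either one can be changed or redeployed without touching the other. The service names and data below are purely illustrative.

```python
# A minimal sketch of the microservices idea: each service keeps its
# own private state and talks to the other only through a small,
# explicit interface.
class InventoryService:
    def __init__(self):
        self._stock = {"sku-1": 2}          # state owned by this service

    def reserve(self, sku: str) -> bool:
        if self._stock.get(sku, 0) > 0:
            self._stock[sku] -= 1
            return True
        return False

class OrderService:
    def __init__(self, inventory: InventoryService):
        self._orders = []                    # state owned by this service
        self._inventory = inventory          # depends only on the interface

    def place_order(self, sku: str) -> str:
        if self._inventory.reserve(sku):
            self._orders.append(sku)
            return "confirmed"
        return "out of stock"
```

In a real system each class would run in its own container and the call to `reserve` would be a network request, but the design property is the same: neither service can reach into the other's data store.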

 

This has profound implications for how we think about business more broadly, as it means we can concentrate on things like product management, features and optimisations in a more targeted way - something that is very difficult in heavy legacy systems.

 


 


Event-driven architecture

Event-driven architecture hasn’t become a headline in the way that, say, cloud or DevOps has. However, as a way of thinking about your architecture in an age of microservices, it’s crucial. Where monolithic legacy architectures were typically built as procedural code or request-driven (i.e. someone requests a transaction or a piece of information and the system responds accordingly), an event-driven model treats interactions with software architecture as things that happen in real time and asynchronously.

Typical request/response patterns imply a wait between the request being made and the response being received in a synchronous (blocking) pattern, whereas events are sent without waiting for a response and are consumed by receiving applications when and how required. This naturally aligns with microservices and the loosely coupled nature of the components and services that make up this architectural approach.

It allows components to be designed, developed, deployed and operated independently, including how they are changed and scaled.
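The fire-and-forget contrast with request/response can be sketched with a minimal in-process event bus: the publisher emits an event and moves on, and any number of subscribers consume it independently. Real systems would use a message broker (Kafka, RabbitMQ and similar); this sketch only shows the decoupling.

```python
# A minimal in-process event bus: publishers don't wait for a reply,
# and subscribers are unknown to them.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fire-and-forget: deliver to every subscriber, return nothing.
        for handler in self._subscribers[topic]:
            handler(event)

# Two independent consumers of the same (hypothetical) event.
bus = EventBus()
audit_log, email_queue = [], []
bus.subscribe("order.created", audit_log.append)
bus.subscribe("order.created", email_queue.append)
bus.publish("order.created", {"order_id": 1})
```

Note that adding a third subscriber (say, an analytics service) would require no change at all to the publisher - which is exactly the independent-deployment property the paragraph above describes.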

 

Automation and infrastructure-as-code (IaC)

Moving towards an event-based microservices architecture can add some additional complexity, even if it removes it in other domains.

This is where infrastructure automation comes in. By treating your entire architecture and infrastructure as code (“Everything as Code”) - something that can be developed, changed and automated - you free yourself from the limitations and restrictions of monolithic legacy systems. For example, one of the most significant constraints associated with legacy / monolithic applications is test environments. In a cloud environment where everything has been developed as code, a new environment can be created, together with the application, as required; it can be made available immediately and removed once testing is complete.
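The test-environment example can be sketched in a few lines: the desired environment is plain data, and creating or destroying it is just running code. A real setup would use a tool such as Terraform, CloudFormation or Pulumi; the resource names and "provisioning" below are stand-ins.

```python
# A toy "environment as code" sketch: the environment is a declarative
# spec, and provisioning/teardown are repeatable functions.
def create_environment(spec):
    """'Provision' every resource named in the spec; return the inventory."""
    return {name: f"provisioned {kind}" for name, kind in spec.items()}

def destroy_environment(env):
    """Tear the environment down once testing is complete."""
    env.clear()
    return env

# Hypothetical spec for a short-lived test environment.
test_env_spec = {
    "app-server": "container",
    "db": "managed-postgres",
}
```

Because the spec is data, the same environment can be stamped out identically for every test run and deleted afterwards - the opposite of the scarce, long-lived test environments typical of monolithic systems.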

 

API integrations

Finally, at the heart of this modern approach to system infrastructure and architecture are APIs, which offer a standard mechanism for exposing and consuming services from an ever-expanding ecosystem. APIs provide a mechanism for connecting and integrating services and data within an organisation, but also facilitate greater integration with external systems and sources of data.

By using APIs you may be able to power your products and services by orchestrating services and accessing richer data, ultimately delivering more value - either to your customers or to your organisation.
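Orchestrating several services into one richer response might look like the sketch below. Both "APIs" are stubbed as local functions with made-up data; in practice each would be an HTTP call to an internal or third-party service.

```python
# A sketch of API orchestration: combine two service calls into one
# enriched response. The services and data here are hypothetical stubs.
def get_customer(customer_id):
    """Stub for a customer-record API."""
    return {"id": customer_id, "name": "Ada"}

def get_orders(customer_id):
    """Stub for an order-history API."""
    return [{"order": "A-100"}, {"order": "A-101"}]

def customer_profile(customer_id):
    """Orchestrate both services into one enriched profile."""
    profile = get_customer(customer_id)
    profile["orders"] = get_orders(customer_id)
    return profile
```

The value is in the composition: neither underlying service knows about the other, yet the orchestrating layer delivers more than either could alone.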

 

The pitfalls: don’t confuse lift and shift with architectural transformation

There will always be different imperatives and priorities driving legacy migration. Sometimes short term tactical approaches may be necessary and sensible, but don’t confuse ‘lift and shift’ to a cloud platform with a broader transformation and modernisation project.

There are often advantages to migrating from physical data centres and servers to the cloud, but without re-architecting the infrastructure and applications to optimise the capabilities that cloud, container, DevOps automation and API-centric applications bring, there’s a limit to how far that will take you.

Many quick win solutions to legacy transformation are simply ‘lift and shift’ and do not unlock the real value, reduce the constraints or power agility and speed to market. Some examples of projects that may provide a stepping stone towards cloud adoption, but are not truly transformational, and do not deliver the benefits of native cloud applications and services, include:

  • Virtualising physical servers and deploying to a cloud platform
  • Transforming the code base, e.g. COBOL to Java, C# or .NET
  • Adding a modern UI/UX veneer to the underlying legacy application
  • Adding API interfaces to the existing legacy application


Even when applications are refactored for the cloud, if they are written and architected in the same way as they were previously, they are likely to become tightly coupled and a future monolith. Beware creating tomorrow's legacy applications today!

 
