The challenges in the financial sector are substantial: new tech-driven challengers in FinTech are putting pressure on well-established players. Regulatory changes like GDPR and PSD2 are forcing companies to take a long, hard look at their own processes and supporting technologies. And finally, the new customer is a digital native who expects financial services to be digital as well.
It’s time to evolve into a digital, data-driven organisation, a message that many companies are embracing right now. But while the financial sector was changing, so was the software industry.
Birth of an ecosystem
The adoption of digital services has left many service providers swamped with data, which is now considered a good thing. Data represents opportunity: the opportunity to get to know your customers better, to anticipate their collective and individual needs, to validate new business models, and so on.
Technology leaders like Google, LinkedIn and Twitter were among the first to experience these massive usage spikes and data torrents, and they struggled: existing technologies and architectures were simply not designed for such loads and volumes. Twitter had the infamous “fail whale”, the error page shown whenever the service was unavailable.
These companies soon began developing custom systems, software uniquely designed to deal with their specific issues. It gradually became clear that batch systems alone would not be able to deal with the ever-growing volumes of data and the need for faster insights. After several iterations and several years of running in production at the frontier of the internet, these systems were either open sourced, or their mechanics were used to create new open source initiatives. Famous examples include Hadoop and Kubernetes, both rooted in internal Google systems; Kafka, open-sourced by LinkedIn; and Storm, open-sourced by Twitter.
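The difference between batch processing and the stream processing these systems pioneered can be illustrated with a small, self-contained sketch in plain Python (no actual Kafka or Storm involved; the word-count task and the sample events are made up for illustration). A batch job recomputes over the full dataset, say, once a night; a streaming job updates running counts the moment each event arrives:

```python
from collections import Counter

def batch_word_count(events):
    """Batch style: one pass over the complete dataset, result at the end."""
    counts = Counter()
    for line in events:
        counts.update(line.split())
    return counts

def streaming_word_count(event_stream):
    """Streaming style: update running counts per event and emit an
    up-to-date view after every message."""
    counts = Counter()
    for line in event_stream:
        counts.update(line.split())
        yield dict(counts)  # insight available immediately, not next morning

events = ["pay invoice", "pay salary", "audit invoice"]
final = batch_word_count(events)
snapshots = list(streaming_word_count(events))
# Both approaches agree on the end result; only the latency differs.
assert Counter(snapshots[-1]) == final
```

The end result is identical; what the streaming variant buys you is an answer after every event instead of only after the whole dataset has been processed, which is exactly the "faster insights" these companies were after.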
Once open-sourced, many of these projects gathered large communities of engineers working on new features, stability and interoperability between the different tools. The tools found adoption in more traditional companies as well, which realized that a new kind of data infrastructure was required to unlock the potential of analytics, machine learning and AI.
Where IT departments once relied on a handful of vendors to supply an integrated toolbox, they must now assemble and operate a complete ecosystem of servers, clusters and services.
Economies of scale
Are you able to do this by yourself? Yes, you can! Every important component is open source, and most of them have an enterprise offering as well. But before you start building, there are some important questions to consider.
1. Do you have the people?
Identifying the right architecture, components and technologies, building applications on top of them, and monitoring and interpreting the whole platform are just some of the skills your organisation needs. The technology has only been around for a couple of years, so expertise is scarce. Trends like (Sec)DevOps, containerization and cloud have a large impact on data infrastructure, so keep an eye out for those skills as well.
This leaves you needing data engineers, ML engineers, cloud architects, DevOps engineers, data scientists, … Demand for these unicorn profiles is high, so they can be hard to find.
2. Do you have the time?
The minimal time required to deliver your software project is your development time: developers frantically hammering code into their machines, administrators spitting out configurations, …
But this is an ideal scenario: it assumes all resources are directly available, there is no learning or experimentation phase, and your architects are perfectly aligned with the many departments within the IT organization, … All these factors have the potential to slow your efforts down considerably. And think about it: this is all time you’re not spending directly on growing your business!
3. Do you have the money?
Since the total time of your development track is unpredictable, the associated cost will be unpredictable as well. And there are other costs: tending to security updates and new versions, assigning people to monitoring and maintenance duties, building up any missing skills in your organization, research, alignment, …
How can we help?
At AE, we have noticed that “data-driven” projects often get bogged down by one of the reasons above, leading to severe delays in delivery or even project cancellation. It became clear that there are customer needs for which traditional consultancy (typically time and materials) is not the right tool for the job. However, we do have the necessary parts to provide that tool:
- We have all the skills required to deliver, ranging from core information management, analytics, integration and application lifecycle management to the latest in stream processing frameworks. We know the ins and outs of these technologies and how they tick.
- We have built components in advance, so we can assemble a fit-for-purpose platform, addressing the individual customer's needs. This allows us to instantly deliver the required infrastructure. The only development left is building the actual pipelines!
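As a sketch of what “building the actual pipelines” amounts to once the platform itself is a given, here is a minimal pipeline abstraction in plain Python. The stage names and sample records are hypothetical, invented for illustration; a real deployment would read from and write to the platform’s messaging layer rather than an in-memory list:

```python
from typing import Callable, Iterable, Iterator

Record = dict
Stage = Callable[[Iterator[Record]], Iterator[Record]]

def pipeline(source: Iterable[Record], *stages: Stage) -> Iterator[Record]:
    """Chain a series of stages over a stream of records."""
    stream: Iterator[Record] = iter(source)
    for stage in stages:
        stream = stage(stream)
    return stream

# Hypothetical stages: drop internal test accounts, flag large amounts.
def drop_test_accounts(stream: Iterator[Record]) -> Iterator[Record]:
    return (r for r in stream if not r["account"].startswith("test-"))

def flag_large_amounts(stream: Iterator[Record]) -> Iterator[Record]:
    for r in stream:
        yield {**r, "large": r["amount"] >= 10_000}

transactions = [
    {"account": "test-001", "amount": 5},
    {"account": "cust-042", "amount": 25_000},
    {"account": "cust-007", "amount": 120},
]

result = list(pipeline(transactions, drop_test_accounts, flag_large_amounts))
```

Each stage is a small, testable function, and the platform concerns (messaging, scaling, monitoring) stay out of the business logic, which is what makes the remaining development effort compact and predictable.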
- Because our components follow our own best practices and embody our expertise, they come at a predictable effort and cost. This allows us to give you a precise and predictable estimate of your TCO!
This new answer to customer needs took shape as a managed ecosystem, which we named “kuori” (Finnish for ‘cortex’, among other things the central processing highway of our brains). We offer it fully managed: we build, deploy, run and maintain it, in a cloud provider of your choosing or on premise. You can scale up or down dynamically, with a menu of ‘building blocks’ to select from. As such, the ecosystem always stays fit-for-purpose and you only pay for what you need at that moment, while having the confidence that it will grow with you, both in scale and in functionality.