I make my living in the software business. I started out as a software developer, moved up to systems engineering and product management, and then on to solutions architecture.
It has made for a great career, but I’ll let you in on a little secret. My true passion is cars and racing. That’s right, I’m a race car guy at heart.
Spending time in both worlds, I started thinking about the changing requirements we often get when we’re creating applications or solutions. Having built and modified a few cars of my own, I also considered what a similar approach would be like for teams building race cars. Challenging is the word that comes to mind. Here’s why.
Let’s say our sponsor comes to us with a simple directive. “Build the fastest car you can.” So, you go and build a top-fuel dragster. Designed for short, straight-line races from a standing start, your light, high-powered car has the quickest acceleration in the world, reaching speeds of over 339 mph (546 kph) in less than three seconds.
In the world of containerized apps, that raw power and speed are the equivalent of major scaling capabilities.
It’s good, but the sponsor wants more. They want the car to be able to handle curves in the track, so the aerodynamics, suspension and steering need to be different. And it needs to be able to handle much longer races, so it needs to be fuel efficient. It also needs real-time monitoring of things like its tires, brakes, clutch, transmission, etc.
You’re essentially being asked to turn your dragster into a Formula One race car. In software, it’s the equivalent of handling completely different functions and workloads by adding all new functionality.
You go back to the drawing board in your garage to see if there’s a way to modify your dragster to meet the new requirements. But as any car person will tell you, it’s just not possible to morph a dragster into an F1 car. So, you start from scratch and build a car that Jackie Stewart would be proud of.
The sponsor is happy for a few minutes, but then comes back with the need for your car to compete in the Pikes Peak International Hill Climb (a rain or shine event). So, you need to retool with an engine that can handle, among other things, the air density that drops rapidly as the course climbs. Or is electric the way to go? Last year, a new course record was set by an electric car. If your experience so far has been with fuel cars, how will you handle the very different challenges of electric design?
You get the idea; I don’t want to overdo the car-building analogy. But if you and your team are building applications and solutions today, you’re likely having to deal with these types of fundamental, unanticipated requirement changes. Maybe it’s competitive pressure, an acquisition, or a complete pivot to fundamentally new technology.
In the old days, these sorts of radical changes in direction would utterly disrupt the development process. But today, using cloud-native architectures and dynamically assembled microservices, you can change a dragster app into an F1 solution, then pivot your design so the resulting vehicle gets to the top of the mountain first.
But here’s the rub. You can’t do any of this transformative work without understanding what the new direction will require in terms of performance and architectural changes. What was once a finely tuned, largely upfront effort now needs to be tuned again. What are the peaks and valleys of high traffic, throughput, latency, or concurrent users going to do to your application? How do you best optimize for those scenarios? When the pivot comes, how do you yet again validate the proper optimization? Is the new architecture even relevant to the new tasks at hand?
What if your toolkit included a solution that simplifies and automates the process of continuous Kubernetes optimization?
– Brad Ascar Sr. Solutions Architect, Carbon Relay
The current pandemic has challenged us all in ways that seemed unimaginable just a couple of months ago. At Carbon Relay, we’re aware of how fortunate we are to be able to do our jobs remotely, continue advancing our technology and remain connected with our communities. As our company grows, we have a responsibility to use our resources — including the company’s time, energy, funding and influence — to affect positive change when and where we can. This has been part of our collective mission since we founded the company and it’s more important than ever today.
Many of our employees are based in Boston and Washington, DC, but we’ve all watched with great concern as residents of New York have suffered shockingly high levels of COVID-19 confirmed cases and deaths. In the face of this catastrophe, we asked ourselves what we might do to try to support New York in a way that has immediate impact and that may bring some small measure of relief to the individuals who are risking their own health to assist others.
Carbon Relay is supporting One Million Masks, a grassroots non-profit organization working with the New York startup community to get one million masks from trusted factories to emergency rooms and intensive care units. The organization initially focused on NYU Langone Medical Center and New York-Presbyterian, but has since expanded its work to include all New York City area hospitals, as well as nursing homes, jail complexes and other correctional facilities. It is also offering limited shipping to verified healthcare workers across the U.S.
1M Masks arose to take on an issue that is, at its core, a monumental supply chain failure. Each of these two large New York hospitals goes through well over 10,000 masks per day, and with traditional supply chains for personal protective equipment bottlenecked, getting masks to the people who urgently need them proved too difficult a task for overwhelmed hospital administrators. The goal of 1M Masks is to put supply chain technology experts up against the immediate problem of securing masks, then accelerating their delivery directly to medical centers — and to cover all associated expenses.
Some of the organizations behind 1M Masks have focused on working through supply chain and logistics issues, but donations of cash are also needed. 100% of the funds raised go to the purchase and distribution of PPE for healthcare workers.
We’ve been able to provide financial support for 1M Masks and invite others to learn more about the organization and consider supporting it as well. To date, the group has shipped over 280,000 masks, of which 140,000 have already been delivered into the hands of front line healthcare workers. If you’re in a position to help or would like to learn more, please visit One Million Masks NYC.
I am really excited to join the Carbon Relay team as chief sales officer and wanted to take a few moments to share why.
After eight very fulfilling years at Acquia, I was looking for a rapidly growing company that I could help accelerate. For anyone who has been fortunate enough to help build successful businesses, there are a few common traits to look for. Those are clearly in place at Carbon Relay. Let me explain.
Firstly, the team. This team is really smart. As a go-to-market leader, it’s wonderful to find a company that is made up of exceptionally clever people. The combination of product management expertise, machine learning scientists and software engineers is impressive. Related to this is the culture that permeates through the organization: integrity, trust and empowerment. On top of that is the unwavering support of Insight Partners, which has made a strong commitment to the firm. Having their expertise to draw upon and collaborate with only reinforces the opportunity.
Secondly, the technology. Red Sky Ops was born out of the very real challenges of operating and running Kubernetes at scale. What began in the lab with countless iterations has emerged as an incredibly innovative technology that uses machine learning to continuously tune and optimize applications running on Kubernetes. It’s that algorithm, along with our team’s deep understanding of how Kubernetes is used in production, that is the competitive advantage we bring to the market. And as Carbon Relay grows and more organizations participate, they will have the opportunity to benefit from the product’s increased learning.
Thirdly, the timing. Since Google donated Kubernetes to the Cloud Native Computing Foundation in 2015, the platform has been rapidly adopted by organizations around the world. It delivers unmatched scale and reliability of workloads. Today the industry is seeing a steep rise in the number of organizations adopting Kubernetes. For just one measure of the enthusiasm around Kubernetes, consider that the KubeCon+CloudNativeCon event in San Diego last year attracted 12,000 attendees, a 50% increase over the previous year.
At Carbon Relay we have a tremendous opportunity to help organizations around the world intelligently optimize their Kubernetes application performance. This will mean that application service level agreements can be met, that the inherent risks in scaling operations can be mitigated and that developer productivity can rise substantially. On top of this, costs can be optimized for efficiency and effectiveness. Looking forward, we envisage a world where every customer gets value from including Red Sky Ops as a step in their CI/CD pipeline: every application tuned and optimized intelligently and easily.
As we build the sales team, it’s clear that the opportunity is enormous. And my commitment is to build a team that is courageous, trustworthy and energized. I am confident that we can achieve tremendous success for our customers, partners, investors and teammates. I look forward to having you join me in the journey.
Managing applications running in Kubernetes can be far more complex and time-consuming than most DevOps, networking and IT professionals expect. The platform’s flexibility is both its strength and its weakness. Kubernetes allows experts to tune it to support their organization’s needs, yet it can be so difficult that many teams find it frustrating to the point of being unmanageable. Teams often resort to dramatically overprovisioning compute and storage resources to ensure application performance, running up unsustainable costs.
To address this complexity, Carbon Relay created Red Sky Ops, an AIOps platform for deploying, scaling and managing containerized applications in Kubernetes environments. It uses machine learning to automatically determine the optimal configuration for apps running in Kubernetes, eliminating the need for ineffective manual optimization.
Using ML-powered experimentation, Red Sky Ops explores the application parameter space, resulting in configurations that both deploy reliably and perform optimally—a nearly impossible task for even the most capable DevOps teams to undertake by hand. We also built Red Sky Ops to learn from each experiment, so the platform becomes more efficient over time.
Now, Carbon Relay is collaborating with the IBM Cloud Kubernetes Service, a complete managed container service, to tackle the Kubernetes complexity challenge head-on. I’ve worked with IBM’s Chris Rosen, program director of offering management for the IBM Cloud Kubernetes Service, to describe in detail the work we’re doing together to help deliver on the vision of Kubernetes.
In Turning a Glimpse of Kubernetes’ Future into Reality, Chris and I describe the collaboration between IBM and Carbon Relay, and how we’re providing enterprises with new and effective ways to use Kubernetes to achieve their business goals—reliably, efficiently, and flexibly.
We’re excited to announce that from November 18-21, we’ll be at KubeCon + CloudNativeCon 2019 in San Diego. As proud CNCF members, we’re looking forward to joining the Kubernetes community in advocating for the advancement of cloud-native technologies. If you’re attending as well, stop by our booth to learn more about Carbon Relay and how we’re applying AI and machine learning to Kubernetes configuration management.
We know first-hand the challenges of managing containerized applications, and that’s why we built Red Sky Ops, the first AIOps solution specifically designed to make DevOps pros’ lives easier by automatically identifying and implementing the optimal settings for any containerized application, on-premises or in the cloud.
Over the last few months, we’ve steadily enhanced Red Sky Ops’ AI to learn even faster, to better support applications deployed in complex environments. Visit us at KubeCon + CloudNativeCon to see how Red Sky Ops can address the challenges your DevOps team may be experiencing.
This year, Carbon Relay has seen incredible growth—first with our launch, then with the unveiling of Red Sky Ops and most recently through our integration with Helm to support charts. We’re excited about what comes next. Make sure you visit us in the exhibition hall to learn about all of our new integrations and features planned for the coming year.
See you there!
Our VP of AI & Machine Learning, Ofer Idan, breaks down the multidimensional chess game that is configuration management in Kubernetes. Today DevOps & IT teams lose the game too often, but our new Red Sky Ops AIOps solution can help.
Against the backdrop of cloud technologies going mainstream, the enterprise IT migration to containerization in general, and to Kubernetes in particular, is well underway. Some organizations are making the move in response to competitive pressures and the need for greater business agility. Others are making the switch for economic reasons; they want more cost-effective IT operations and see Kubernetes as a smart way to get there.
The momentum of this push to Kubernetes is understandable. Its benefits are too compelling to ignore. For IT operations, it makes applications more portable and scalable than alternatives, simpler to develop, and easier, faster and cheaper to deploy. Essentially, Kubernetes enables companies to support their growth and change in nimble, efficient and cost-effective ways.
That’s the promise. But the reality is that DevOps and IT teams in many organizations still can’t quite get their Kubernetes-powered operations to “fly right.”
The reason is the system’s complexity. This stems partially from the flexibility of Kubernetes, which gives teams seemingly endless options and choices. However, that flexibility morphs into complexity as teams initially work to get their clusters up and running. With their clusters up but applications not performing to their liking, teams then try to tune their apps. That’s when they really hit the complexity wall with Kubernetes.
For organizations that are early in their Kubernetes journey, this complexity makes it difficult for their teams to get applications to deploy reliably and have consistently high performance. For enterprises that are further along in their Kubernetes migrations, complexity is what’s preventing them from realizing their anticipated cost savings.
As for software products that help teams get over their Kubernetes complexity hurdles, the options have been limited. There’s no shortage of services for deploying Kubernetes clusters, and products for monitoring application performance. But to date, there have been no available solutions specifically designed for optimizing how applications run in Kubernetes environments.
Without software-driven options, DevOps and IT teams have tackled it the old-fashioned way: manually, using trial and error. They change one or two variables, then nervously wait to see the impact. Often it’s unclear why changing “A” caused “B” to break, so they keep on tinkering. For businesses where application performance is paramount, such as SaaS companies or MSPs, teams often default to costly overprovisioning.
Hence, the complexity-related problems cited above. Some of these occur at the cluster level, like having to decide how large to make nodes and how many of them to create. But many more problems crop up at the application level.
As an example, let’s look at a web app such as an e-commerce site. Minimizing latency is critical for a smooth user experience, so that is a key consideration. To achieve that goal consistently, the app needs to be tuned properly.
When the app is deployed in Kubernetes, it’s up to a DevOps or IT team member to select the number of instances and choose how much CPU, memory, and other resources to allocate to each one. Allocate too few resources, and the app can slow down or even crash. Allocate too many, and cloud costs skyrocket. Figuring out the “just right” configuration settings, and doing so quickly, accurately and consistently for a growing roster of apps, is a tall order.
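To make this trade-off concrete, here is a minimal Python sketch of the provisioning dilemma. The latency model, pricing rates and request volumes are all invented for illustration; real sizing depends on profiling the actual workload.

```python
def estimate_latency_ms(cpu_millicores: int, replicas: int, rps: float) -> float:
    """Toy queueing-style model: latency blows up as the fleet nears saturation."""
    capacity = cpu_millicores * replicas * 0.1   # made-up requests/sec the fleet absorbs
    utilization = min(rps / capacity, 0.99)
    return 20 / (1 - utilization)


def monthly_cost_usd(cpu_millicores: int, memory_mib: int, replicas: int) -> float:
    """Toy on-demand pricing; the per-unit rates are hypothetical."""
    return replicas * (cpu_millicores * 0.02 + memory_mib * 0.003)


# Under-provisioned: two small replicas run hot and latency breaches a 100 ms target.
print(estimate_latency_ms(cpu_millicores=250, replicas=2, rps=45))
# Over-provisioned: latency is comfortable, but the monthly bill is roughly 14x higher.
print(monthly_cost_usd(1000, 1024, 8), monthly_cost_usd(250, 512, 2))
```

Even in this toy model, the cheapest configuration that still meets the target is not obvious, and the search space here has only two or three knobs rather than the dozens a real application exposes.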
The fact is, configuration management in Kubernetes is a multidimensional chess game, and one that DevOps and IT teams are losing too often. To win, and do so consistently, they need a better way forward.
There’s good news for DevOps and IT teams that are presently wrestling with Kubernetes’ complexity. A new, software-driven approach for handling the basics of application configuration in Kubernetes environments has emerged. Powered by advanced machine learning, this new approach eliminates most of this complexity by automatically determining optimal application configuration parameters.
These technologies, which build upon established methods in data science, allow DevOps teams to automate the process of parameter tuning, thereby freeing them to focus on other mission-critical tasks. Using machine learning-powered experimentation, these platforms allow for efficient exploration of the application parameter space, resulting in configurations that are guaranteed to both deploy reliably and perform optimally. As with all powerful ML techniques, the ability to learn over time plays a crucial role in making the process scalable and more efficient. With the help of these technologies, teams can rest assured that development and scaling of their applications will fit naturally into the optimization process, which will become more intelligent with time.
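The experiment loop itself can be sketched in a few lines. The version below uses plain random search over a toy configuration space; production platforms rely on far more sample-efficient ML such as Bayesian optimization, and every number here is a stand-in rather than a measurement.

```python
import random


def run_trial(cpu_millicores: int, replicas: int) -> tuple[float, float]:
    """Stand-in for deploying a candidate configuration and measuring it.
    Returns (latency_ms, monthly_cost); both models are invented."""
    capacity = cpu_millicores * replicas * 0.1
    latency = 20 / (1 - min(45 / capacity, 0.99))
    cost = replicas * cpu_millicores * 0.02
    return latency, cost


def tune(trials: int = 200, latency_slo_ms: float = 100, seed: int = 0):
    """Keep the cheapest configuration that still meets the latency SLO."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        cfg = (rng.choice([250, 500, 1000]), rng.randint(1, 8))
        latency, cost = run_trial(*cfg)
        if latency <= latency_slo_ms and (best is None or cost < best[1]):
            best = (cfg, cost)
    return best


best_cfg, best_cost = tune()
```

Each iteration stands in for a real experiment (deploy a candidate, apply load, measure), and the learning-over-time aspect corresponds to replacing the random sampler with a model that proposes progressively better candidates.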
In short, ML-powered approaches for deploying, optimizing, scaling and managing containerized applications in Kubernetes environments are coming into the spotlight. They are proving themselves by intelligently analyzing and managing hundreds of interrelated variables with millions of potential combinations to automatically select the optimal settings for each application.
With our web app example, rather than the DevOps team struggling to determine the best parameter values for their app, this new approach to configuration optimization delivers optimized parameters to them automatically. In addition, the organization and its customers both benefit from a more reliable, high-quality user experience.
It’s all about high performance and reliability with cost-efficiency. By enabling easier and more effective deployment of applications, and ensuring that they are properly resourced and optimally configured, the new, ML-based approach will be a catalyst that creates even more Kubernetes adoption and success. And that’s a very good thing.
Article featured on The NewStack: https://thenewstack.io/the-new-basics-of-configuration-management-in-kubernetes/
Ready to take your Kubernetes environment to the next level? Schedule a demo with our team.