changing featured case studies (#10092)
Parent: 5336022bef
Commit: bf343d81e9
@@ -5,9 +5,9 @@ case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
logo: adform_featured_logo.png
-draft: true
+draft: false
featured: true
-weight: 1
+weight: 47
quote: >
  Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier.
---
@@ -35,15 +35,15 @@ quote: >

<h2>Solution</h2>
The team, which had already been using <a href="https://prometheus.io/">Prometheus</a> for monitoring, embraced <a href="https://kubernetes.io/">Kubernetes</a> and cloud native practices in 2017. "To start our Kubernetes journey, we had to adapt all our software, so we had to choose newer frameworks," says Apšega. "We also adopted the microservices way, so observability is much better because you can inspect the bug or the services separately."

</div>

<div class="col2">

<h2>Impact</h2>
"Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. The release process went from several hours to several minutes. Autoscaling has been at least 6 times faster than the semi-manual VM bootstrapping and application deployment required before. The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Hardware resource usage has been reduced as well, with containers running 2-3 times more efficiently than virtual machines. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes," says Apšega. Prometheus has also had a positive impact: "It provides high availability for metrics and alerting. We monitor everything starting from hardware to applications. Having all the metrics in <a href="https://grafana.com/">Grafana</a> dashboards provides great insight on your systems."

</div>

@@ -73,9 +73,9 @@ The company has a large infrastructure: <a href="https://www.openstack.org/">Ope

</div>
<section class="section3">
<div class="fullcol">

The team, which had already been using Prometheus for monitoring, embraced Kubernetes, microservices, and cloud native practices. "The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral," says Apšega. "And we can see that a community really gathers around it."<br><br>
A proof of concept project was started, with a Kubernetes cluster running on bare metal in the data center. When developers saw how quickly containers could be spun up compared to the virtual machine process, "they wanted to ship their containers in production right away, and we were still doing proof of concept," says IT Systems Engineer Andrius Cibulskis.
Of course, a lot of work still had to be done. "First of all, we had to learn Kubernetes, see all of the moving parts, how they glue together," says Apšega. "Second of all, the whole CI/CD part had to be redone, and our DevOps team had to invest more man hours to implement it. And third is that developers had to rewrite the code, and they’re still doing it."
<br><br>
The first production cluster was launched in the spring of 2018, and is now up to 20 physical machines dedicated for pods throughout three data centers, with plans for separate clusters in the other four data centers. The user-facing Adform application platform, data distribution platform, and back ends are now all running on Kubernetes. "Many APIs for critical applications are being developed for Kubernetes," says Apšega. "Teams are rewriting their applications to .NET core, because it supports containers, and preparing to move to Kubernetes. And new applications, by default, go in containers."
@@ -106,7 +106,7 @@ Prometheus has also had a positive impact: "It provides high availability for me

</div>

<div class="fullcol">
All of these benefits have trickled down to individual team members, whose working lives have been changed for the better. "They used to have to get up at night to restart some services, and now Kubernetes handles all of that," says Apšega. Adds Cibulskis: "Releases are really nice for them, because they just push their code to Git and that’s it. They don’t have to worry about their virtual machines anymore." Even the security teams have been impacted. "Security teams are always not happy," says Apšega, "and now they’re happy because they can easily inspect the containers."
The company plans to remain in the data centers for now, "mostly because we want to keep all the data, to not share it in any way," says Cibulskis, "and it’s cheaper at our scale." But, Apšega says, the possibility of using a hybrid cloud for computing is intriguing: "One of the projects we’re interested in is the <a href="https://github.com/virtual-kubelet/virtual-kubelet">Virtual Kubelet</a> that lets you spin up the working nodes on different clouds to do some computing."
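For readers curious how that bursting works in practice: a Virtual Kubelet registers as a tainted "virtual node", and a pod opts in by tolerating that taint. The sketch below assumes the commonly used virtual-kubelet.io/provider taint and type: virtual-kubelet node label; both vary by provider, and nothing here reflects Adform’s setup.

```yaml
# Hypothetical pod that is allowed to land on a Virtual Kubelet node
apiVersion: v1
kind: Pod
metadata:
  name: burst-compute-job                # illustrative name
spec:
  containers:
    - name: worker
      image: example.com/compute-worker:latest   # illustrative image
  nodeSelector:
    type: virtual-kubelet                # label many providers set on the virtual node
  tolerations:
    - key: virtual-kubelet.io/provider   # taint that keeps ordinary pods off the virtual node
      operator: Exists
```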
<br><br>
Apšega, Cibulskis and their colleagues are keeping tabs on how the cloud native ecosystem develops, and are excited to contribute where they can. "I think that our company just started our cloud native journey," says Apšega. "It seems like a huge road ahead, but we’re really happy that we joined it."

@@ -1,4 +1,4 @@
---
-title: UK Home Office
+title: Home Office UK
content_url: https://www.youtube.com/watch?v=F3iMkz_NSvU
---

@@ -3,7 +3,7 @@ title: ING Case Study
linkTitle: ING
case_study_styles: true
cid: caseStudies
-weight: 20
+weight: 50
featured: true
css: /css/style_case_studies.css
quote: >

@@ -4,7 +4,7 @@ linkTitle: Pearson
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
-featured: true
+featured: false
quote: >
  We’re already seeing tremendous benefits with Kubernetes—improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online.
---

@@ -84,4 +84,4 @@ quote: >
So far, about 15 production products are running on the new platform, including Pearson’s new flagship digital education service, the Global Learning Platform. The Cloud Platform team continues to prepare, onboard and support customers that are a good fit for the platform. Some existing products will be refactored into 12-factor apps, while others are being developed so that they can live on the platform from the get-go. "There are challenges with bringing in new customers of course, because we have to help them to see a different way of developing, a different way of building," says Shirley. <br><br>
But, he adds, "It is our corporate motto: Always Learning. We encourage those teams that haven’t started a cloud native journey, to see the future of technology, to learn, to explore. It will pique your interest. Keep learning."
</div>
</section>

@@ -4,7 +4,7 @@ linkTitle: Pinterest
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
-featured: true
+featured: false
weight: 30
quote: >
  We are in the position to run things at scale, in a public cloud environment, and test things out in a way that a lot of people might not be able to do.
@@ -32,15 +32,15 @@ quote: >
<br>

<h2>Solution</h2>
The first phase involved moving services to Docker containers. Once these services went into production in early 2017, the team began looking at orchestration to help create efficiencies and manage them in a decentralized way. After an evaluation of various solutions, Pinterest went with Kubernetes.

</div>

<div class="col2">

<h2>Impact</h2>
"By moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins," says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest. "We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per-day when compared to the previous static cluster."

</div>

@@ -55,7 +55,7 @@ quote: >
<section class="section2">
<div class="fullcol">
<h2>Pinterest was born on the cloud—running on <a href="https://aws.amazon.com/">AWS</a> since day one in 2010—but even cloud native companies can experience some growing pains.</h2> Since its launch, Pinterest has become a household name, with more than 200 million active monthly users and 100 billion objects saved. Underneath the hood, there are 1,000 microservices running and hundreds of thousands of data jobs.<br><br>
With such growth came layers of infrastructure and diverse set-up tools and platforms for the different workloads, resulting in an inconsistent and complex end-to-end developer experience, and ultimately less velocity to get to production.
So in 2016, the company launched a roadmap toward a new compute platform, led by the vision of having the fastest path from an idea to production, without making engineers worry about the underlying infrastructure. <br><br>
The first phase involved moving to Docker. "Pinterest has been heavily running on virtual machines, on EC2 instances directly, for the longest time," says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group. "To solve the problem around packaging software and not make engineers own portions of the fleet and those kinds of challenges, we standardized the packaging mechanism and then moved that to the container on top of the VM. Not many drastic changes. We didn’t want to boil the ocean at that point."
|
@ -68,7 +68,7 @@ The first phase involved moving to Docker. "Pinterest has been heavily running o
|
|||
</div>
|
||||
<section class="section3">
|
||||
<div class="fullcol">
|
||||
|
||||
|
||||
The first service that was migrated was the monolith API fleet that powers most of Pinterest. At the same time, Benedict’s infrastructure governance team built chargeback and capacity planning systems to analyze how the company uses its virtual machines on AWS. "It became clear that running on VMs is just not sustainable with what we’re doing," says Benedict. "A lot of resources were underutilized. There were efficiency efforts, which worked fine at a certain scale, but now you have to move to a more decentralized way of managing that. So orchestration was something we thought could help solve that piece."<br><br>
|
||||
That led to the second phase of the roadmap. In July 2017, after an eight-week evaluation period, the team chose Kubernetes over other orchestration platforms. "Kubernetes lacked certain things at the time—for example, we wanted Spark on Kubernetes," says Benedict. "But we realized that the dev cycles we would put in to even try building that is well worth the outcome, both for Pinterest as well as the community. We’ve been in those conversations in the Big Data SIG. We realized that by the time we get to productionizing many of those things, we’ll be able to leverage what the community is doing."<br><br>
|
||||
At the beginning of 2018, the team began onboarding its first use case into the Kubernetes system: Jenkins workloads. "Although we have builds happening during a certain period of the day, we always need to allocate peak capacity," says Benedict. "They don’t have any auto-scaling capabilities, so that capacity stays constant. It is difficult to speed up builds because ramping up takes more time. So given those kind of concerns, we thought that would be a perfect use case for us to work on."
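The elasticity described here comes from running each build in a throwaway pod instead of on statically provisioned workers. As a rough sketch of the pattern (not Pinterest’s actual configuration), the Jenkins Kubernetes Plugin linked later in this article accepts an agent pod template along these lines, creates the pod when a build starts, and deletes it when the build finishes:

```yaml
# Sketch of a Jenkins agent pod template; labels and sizing are hypothetical
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: agent        # label used to track plugin-managed agents
spec:
  containers:
    - name: jnlp          # the agent container name the plugin expects
      image: jenkins/inbound-agent:latest
      resources:
        requests:
          cpu: "1"        # hypothetical per-build sizing
          memory: 2Gi
  restartPolicy: Never    # build pods are disposable, never restarted in place
```

Because pods only exist while builds run, off-peak capacity is released back to the shared cluster, which is the mechanism behind the reclaimed-capacity numbers quoted above.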

@@ -86,7 +86,7 @@ At the beginning of 2018, the team began onboarding its first use case into the
<div class="fullcol">
They ramped up the cluster, and working with a team of four people, got the Jenkins Kubernetes cluster ready for production. "We still have our static Jenkins cluster," says Benedict, "but on Kubernetes, we are doing similar builds, testing the entire pipeline, getting the artifact ready and just doing the comparison to see, how much time did it take to build over here. Is the SLA okay, is the artifact generated correct, are there issues there?" <br><br>
"So far it’s been good," he adds, "especially the elasticity around how we can configure our Jenkins workloads on Kubernetes shared cluster. That is the win we were pushing for."<br><br>
By the end of Q1 2018, the team successfully migrated Jenkins Master to run natively on Kubernetes and also collaborated on the <a href="https://github.com/jenkinsci/kubernetes-plugin">Jenkins Kubernetes Plugin</a> to manage the lifecycle of workers. "We’re currently building the entire Pinterest JVM stack (one of the larger monorepos at Pinterest which was recently bazelized) on this new cluster," says Benedict. "At peak, we run thousands of pods on a few hundred nodes. Overall, by moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins. We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per-day when compared to the previous static cluster."

</div>

@@ -5,7 +5,7 @@ case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: true
-weight: 10
+weight: 49
quote: >
  I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables.
---

@@ -33,14 +33,14 @@ quote: >

<h2>Solution</h2>
Led by the belief that “the cloud native architectures and patterns really give us a lot of flexibility in meeting the needs of that sort of customer base,” Linder partnered with <a href="http://rancher.com">Rancher Labs</a> to build Sling TV’s next-generation platform around Kubernetes. “We are going to need to enable a hybrid cloud strategy including multiple public clouds and an on-premise VMWare multi data center environment to meet the needs of the business at some point, so getting that sort of abstraction was a real goal,” he says. “That is one of the biggest reasons why we picked Kubernetes.” The team launched its first applications on Kubernetes in Sling TV’s two internal data centers. The push to enable AWS as a data center option is underway and should be available by the end of 2018. The team has added <a href="https://prometheus.io/">Prometheus</a> for monitoring and <a href="https://github.com/jaegertracing/jaeger">Jaeger</a> for tracing, to work alongside the company’s existing tool sets: Zenoss, New Relic and ELK.

</div>
<br><br>
<div class="col2" style="width:95% !important;padding-left:5%;padding-top:2% !important">

<h2>Impact</h2>
“We are getting to the place where we can one-click deploy an entire data center – the compute, network, Kubernetes, logging, monitoring and all the apps,” says Linder. “We have really enabled a platform thinking based approach to allowing applications to consume common tools. A new application can be onboarded in about an hour using common tooling and CI/CD processes. The gains on that side have been huge. Before, it took at least a few days to get things sorted for a new application to deploy. That does not consider the training of our operations staff to manage this new application. It is two or three orders of magnitude of savings in time and cost, and operationally it has given us the opportunity to let a core team of talented operations engineers manage common infrastructure and tooling to make our applications available at web scale.”

</div>

@@ -69,7 +69,7 @@ Led by the belief that “the cloud native architectures and patterns really giv

</div>
<section class="section3">
<div class="fullcol">

One big reason he chose Kubernetes was getting a level of abstraction that would enable the company to “enable a hybrid cloud strategy including multiple public clouds and an on-premise VMWare multi data center environment to meet the needs of the business,” he says. Another factor was how much the Kubernetes ecosystem has matured over the past couple of years. “We have spent a lot of time and energy around making logging, monitoring and alerting production ready to give us insights into applications’ well-being,” says Linder. The team has added <a href="https://prometheus.io/">Prometheus</a> for monitoring and <a href="https://github.com/jaegertracing/jaeger">Jaeger</a> for tracing, to work alongside the company’s existing tool sets: Zenoss, New Relic and ELK.<br><br>
With the emphasis on common tooling, “We are getting to the place where we can one-click deploy an entire data center – the compute, network, Kubernetes, logging, monitoring and all the apps,” says Linder. “We have really enabled a platform thinking based approach to allowing applications to consume common tools and services. A new application can be onboarded in about an hour using common tooling and CI/CD processes. The gains on that side have been huge. Before, it took at least a few days to get things sorted for a new application to deploy. That does not consider the training of our operations staff to manage this new application. It is two or three orders of magnitude of savings in time and cost, and operationally it has given us the opportunity to let a core team of talented operations engineers manage common infrastructure and tooling to make our applications available at web scale.”<br><br>
@@ -86,7 +86,7 @@ With the emphasis on common tooling, “We are getting to the place where we can
<div class="fullcol">
The team launched its first applications on Kubernetes in Sling TV’s two internal data centers in the early part of Q1 2018 and began to enable AWS as a data center option. The company plans to expand into other public clouds in the future.
The first application that went into production is a web socket-based back-end notification service. “It allows back-end changes to trigger messages to our clients in the field without the polling,” says Linder. “We are talking about very high volumes of messages with this application. Without something like Kubernetes to be able to scale up and down, as well as just support that overall workload, that is pretty hard to do. I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables.”<br><br>
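The scale-up-and-down behavior Linder describes is usually expressed declaratively in Kubernetes. As a minimal sketch (the names and thresholds below are hypothetical, not Sling TV’s configuration), a HorizontalPodAutoscaler for a notification service could look like this:

```yaml
# Minimal HorizontalPodAutoscaler sketch; all names and numbers are hypothetical
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: notification-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: notification-service   # the Deployment being scaled
  minReplicas: 3                  # floor for steady-state traffic
  maxReplicas: 50                 # headroom for message bursts
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add pods when average CPU crosses 70%
```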
Linder oversees three teams working together on building the next-generation platform: a platform engineering team; an enterprise middleware services team; and a big data and analytics team. “We have really tried to bring everything together to be able to have a client application interact with a cloud native middleware layer. That middleware layer must run on a platform, consume platform services and then have logs and events monitored by an artificial agent to keep things running smoothly,” says Linder.

</div>

@@ -100,7 +100,7 @@ The first application that went into production is a web socket-based back-end n
<div class="fullcol">
Ultimately, this undertaking is about “trying to marry Kubernetes with AI to enable web scale that just works,” he adds. “We want the artificial agents and the big data platform using the actual logs and events coming out of the applications, Kubernetes, the infrastructure, backing services and changes to the environment to make decisions like, ‘Hey we need more capacity for this service so please add more nodes.’ From a platform perspective, if you are truly doing web scale stuff and you are not using AI and big data, in my opinion, you are going to implode under your own weight. It is not a question of if, it is when. If you are in a ‘millions of users’ sort of environment, that implosion is going to be catastrophic. We are on our way to this goal and have learned a lot along the way.”<br><br>
For Sling TV, moving to cloud native has been exactly what they needed. “We have to be able to react to changes and hiccups in the matrix,” says Linder. “It is the foundation for our ability to deliver a high-quality service for our customers. Building intelligent platforms, tools and clients in the field consuming those services has got to be part of all of this. In my eyes that is a big part of what cloud native is all about. It is taking these distributed, potentially unreliable entities and enabling a robust customer experience they expect.”
|
@@ -7,7 +7,7 @@ cid: caseStudies
css: /css/style_case_studies.css
logo: ygrene_featured_logo.png
featured: true
-weight: 2
+weight: 48
quote: >
  We had to change some practices and code, and the way things were built, but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That’s very fast for a finance company.
---

@@ -34,14 +34,14 @@ quote: >

<h2>Solution</h2>
Moving from an Engine Yard platform and Amazon Elastic Beanstalk, the Ygrene team embraced cloud native technologies and practices: <a href="https://kubernetes.io/">Kubernetes</a> to help scale out and distribute workloads, <a href="https://github.com/theupdateframework/notary">Notary</a> to put in build-time controls and get trust on the Docker images being used with third-party dependencies, and <a href="https://www.fluentd.org/">Fluentd</a> for "observing every part of our stack," all running on <a href="https://aws.amazon.com/ec2/spot/">Amazon EC2 Spot</a>.

</div>

<div class="col2">

<h2>Impact</h2>
Before, deployments typically took three to four hours, and two or three months’ worth of work would be deployed at low-traffic times every week or two weeks. Now, they take five minutes for Kubernetes, and an hour for the overall deploy with smoke testing. And "we’re able to deploy three or four times a week, with just one week’s or two days’ worth of work," Adams says. "We’re deploying during the work week, in the daytime and without any downtime. We had to ask for business approval to take the systems down, even in the middle of the night, because people could be doing loans. Now we can deploy, ship code, and migrate databases, all without taking the system down. The company gets new features without worrying that some business will be lost or delayed." Additionally, by using the kops project, Ygrene can now run its Kubernetes clusters with AWS EC2 Spot, at a tenth of the previous cost. These cloud native technologies have "changed the game for scalability, observability, and security—we’re adding new data sources that are very secure," says Adams. "Without Kubernetes, Notary, and Fluentd, we couldn’t tell our investors and team members that we knew what was going on."
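For context on the Spot setup: kops models worker capacity as InstanceGroups, and moving one to EC2 Spot is largely a matter of setting a bid price. The following is a minimal sketch with hypothetical names and sizes, not Ygrene’s actual configuration:

```yaml
# Hypothetical kops InstanceGroup running worker nodes on EC2 Spot
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: spot-nodes
  labels:
    kops.k8s.io/cluster: example.k8s.local   # hypothetical cluster name
spec:
  role: Node
  machineType: m5.large
  minSize: 3
  maxSize: 10
  maxPrice: "0.10"   # USD/hour bid; nodes run while the Spot price stays below this
```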

</div>

@@ -57,7 +57,7 @@ quote: >
<div class="fullcol">
<h2>In less than a decade, <a href="https://ygrene.com/index.html" style="text-decoration:underline">Ygrene</a> has funded more than $1 billion in loans for renewable energy projects.</h2> A <a href="https://www.energy.gov/eere/slsc/property-assessed-clean-energy-programs">PACE</a> (Property Assessed Clean Energy) financing company, "We take the equity in a home or a commercial building, and use it to finance property improvements for anything that saves electricity, produces electricity, saves water, or reduces carbon emissions," says Development Manager Austin Adams. <br><br>
In order to approve those loans, the company processes an enormous amount of underwriting data. "We have tons of different points that we have to validate about the property, about the company, or about the person," Adams says. "So we have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data in real time." <br><br>
By 2017, deployments and scalability had become pain points. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically," he says. Migrating to AWS Elastic Beanstalk didn’t solve the problem: "The Scala services needed a lot of data from the main Ruby on Rails services and from different vendors, so they were asking for information from our Ruby services at a rate that those services couldn’t handle. We had lots of configuration misses with Elastic Beanstalk as well. It just came to a head, and we realized we had a really unstable system."

</div>
</section>

@@ -68,7 +68,7 @@ By 2017, deployments and scalability had become pain points. The company was uti

</div>
<section class="section3">
<div class="fullcol">

Adams, along with the rest of the team, set out to find a solution that would be transformational but "wouldn’t require us to make huge refactors to the code base," he says. And as a finance company, Ygrene needed security as much as scalability. They found the answer by embracing cloud native technologies: Kubernetes to help scale out and distribute workloads, Notary to achieve reliable security at every level, and Fluentd for observability. "Kubernetes was where the community was going, and we wanted to be future proof," says Adams. <br><br>
With Kubernetes, the team was able to quickly containerize the Ygrene application with Docker. "We had to change some practices and code, and the way things were built," Adams says, "but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That’s very fast for a finance company."<br><br>
How? Cloud native has "changed the game for scalability, observability, and security—we’re adding new data sources that are very secure," says Adams. "Without Kubernetes, Notary, and Fluentd, we couldn’t tell our investors and team members that we knew what was going on." <br><br>
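The Fluentd piece of that observability story is commonly run as a DaemonSet, so one collector pod sits on every node and tails container logs. A generic sketch follows (image and names are illustrative, not Ygrene’s manifest):

```yaml
# Generic Fluentd log-collector DaemonSet; not Ygrene's actual deployment
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          volumeMounts:
            - name: varlog
              mountPath: /var/log   # where container logs land on each node
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```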

(File diff suppressed because it is too large.)

@@ -0,0 +1 @@
+{"Target":"css/styles.css","MediaType":"text/css","Data":{}}