Listen to the Podcast
Here are the topics we cover this week with Alex and Brandon:
What is CoreOS?
A minimal Linux operating system that is less than a year old and built to be fault-tolerant, distributed, and easy to scale.
Who should use CoreOS?
Sophisticated ops teams and advanced DevOps people.
How is CoreOS different than other minimal Linux distributions?
Built-in clustering support, few binaries, and no packaging system (no apt-get or yum), dependent on Docker containers for managing software and apps on the OS.
How is web app development on CoreOS different than setting up a traditional LAMP app?
Instead of installing dependencies (like Apache or Nginx) at the OS level, you containerize them first and install the Docker containers for your web app.
What is the largest known/tested CoreOS cluster?
Currently tested on 50+ VM clusters.
Can you explain etcd?
And how about systemd?
Systemd does two things for CoreOS: launch Docker containers and let you register those services into etcd.
What do you think about all the systemd controversy in the Linux community?
Linux has done things the same way for decades, inheriting SysV concepts from Unix. There are community members who are afraid of change, but the overall consensus is that systemd is an improvement and should be the new standard.
Can you explain fleet and how to manage multi-server CoreOS clusters?
Fleet also lives on CoreOS servers and uses the etcd backbone to manage distribution of jobs and tasks to the cluster.
Can you explain the innovative way CoreOS updates itself?
It is actually a very common way to update a lot of hardware and firmware, like iPhones, Internet routers, and even Google Chrome; we are just applying the same concept to Linux. The idea is to have two partitions on the hard drive and automatically download operating system updates onto the standby partition. If for any reason the update fails, the other partition will still work.
Can you talk about what SkyDNS’s new etcd integration does to service registration?
SkyDNS now uses etcd as the data storage backend for their distributed DNS system. This allows you to build internal DNS resolution on a per-container basis in a distributed system like CoreOS very easily. It will really help with building large clustered apps on CoreOS.
How will CoreOS make money?
CoreOS is a platform for building platforms. The CoreOS system will always be open source, but there are plans to build enterprise tools that make CoreOS easier to use for large organizations in the future. In the meantime, CoreOS can provide professional services to help companies integrate CoreOS best practices into their application development process.
What are the biggest problems in real-life Docker adoption today for organizations?
Multi-machine Docker is a problem, but the Docker guys will solve that soon. The biggest problem besides that is that Docker doesn't solve the problems of Linux system administration. For example, the Heartbleed bug affects Docker containers in the same way it does a virtual machine. So you still need to build system administration and configuration management (Chef/Puppet) processes around Docker.
What do you think of Deis and Flynn?
CoreOS is a platform for building platforms and we love seeing the ecosystem growing around CoreOS with these projects.
And a few more great questions…
If you have any questions for the CoreOS team, feel free to leave comments in this post.
Lucas Carlson: Hello. Welcome. This is Lucas Carlson, Chief Innovation Officer for CenturyLink. Today, we have a special treat for you guys. We have the CEO and CTO of CoreOS. We have Alex Polvi, the CEO, and we have Brandon Philips, the CTO.
I'm really excited to have them on the show. I'm really excited to learn more about CoreOS, and pick their brains. Before we get into it, can you tell us about you guys and your background?
Alex Polvi: I was previously with Rackspace, helping to [indecipherable 0:00:34] product development. I joined Rackspace at the acquisition of my previous startup, Cloudkick. We built cloud server monitoring and management tools. Before that, I was working at Mozilla on a bunch of things related to Firefox and Firefox add-ons.
Brandon Philips: I had some time at Rackspace where I was working on monitoring products. Then I worked on a platform called [indecipherable 0:00:56] where we did [indecipherable 0:00:57] sort of a new [indecipherable 0:00:58]. Before that I was doing kernel infrastructure and doing kernel hacking at SuSE and SuSE Labs Group there.
Lucas: Very cool. Excellent. Well, smart people and great startups and great companies. You guys have made your dent already. It's exciting to see what you guys are doing with CoreOS. As you know, I've been following it very closely.
My audience has been following it very closely. For those who aren't familiar with it, can you give us the pitch? What is CoreOS?
Alex: Sure. It's a Linux operating system. It pulls a bunch of different components together to help you build an infrastructure that is fault-tolerant, distributed, and easy to scale. It involves Linux containers; it involves how we do distributed systems and fault tolerance.
It involves a lightweight Linux distribution, the OS itself, and all of these components wrapped up together are the CoreOS story.
Lucas: So it's like a miniature Ubuntu that's built for big deployments of large systems. Is that right?
Alex: Yeah, it's really inspired by what you would see inside of an infrastructure that runs production at Google: a lightweight OS, clustering, packaging inside containers. All these best practices, wrapped into a Linux OS that you can go and boot wherever you can run Linux. You can run it on cloud or you can run it on bare metal.
Lucas: Great. How long have you guys been around?
Alex: We created our GitHub repository on February 28th, 2013. We put our first alpha out, I don't know the exact date, but in August of 2013, and we've been shipping releases ever since.
Lucas: That's great. It's amazing to see how much work you guys have put into this in such a small amount of time. The first question I have is, who should use CoreOS? What kind of operating system is this built for? Is it great for a developer? Is it good for dev-ops? What kind of person is it built for?
Alex: There's a spectrum from traditional operations with no development, to hardcore software development with no operations, and somewhere in the middle is that nebulous dev-ops gray zone. We are to the right of dev-ops, a little bit toward the developer side. I would say it's advanced dev-ops: a dev-ops person who would understand why a consensus algorithm is important.
As soon as you start throwing around algorithms, you hit another class of dev-ops knowledge around computer science. It's really for sophisticated operations teams that aren't so sophisticated that they want to go build all this themselves, but still want a truly proper, world-class piece of infrastructure.
Lucas: Got it. Makes sense. How is CoreOS different from other minimal Linux distros? Bluebox is another one, right?
Alex: Bluebox? I'm not familiar with it.
Lucas: What's different about CoreOS, for those who haven't tried it or aren't familiar with it, compared to other minimalistic distributions that don't have anything in them?
Alex: Sure. Do you want to take that one?
Brandon: Yeah. A few things. The first is, instead of having a traditional package manager, we lean heavily on having containers. If you have an application of whatever sort, you package all of its dependencies together into a container. The reason that we did that is we want to redraw how a Linux distribution looks.
We want to be able to have this very small operating system that we can update and say, "This is where our contractual agreement is, this is where the piece of software that the distribution is responsible for, and this is how you ship your Python or whatever application requirements you have all together."
It's a little bit different in that we don't have that package manager, and then our updates are applied a little differently, via an active-passive update system. While you're running CoreOS on this A partition, there's a B partition that's getting updates in the background.
So, we can atomically upgrade you from the previous version of CoreOS that you've had running to the next version and then roll back if problems happen with that update. Those are a couple of the ways that we're a little different, and then, of course, we have all the clustering stuff.
Lucas: Interesting. Instead of having something like apt-get, you depend on Docker and the Docker index to have all the packages, and you build containers, which get distributed onto a CoreOS system. That's how you get software onto CoreOS. You don't actually go in and "apt-get install" or "apt-get upgrade." You actually "docker pull" to get your software onto your CoreOS system. Is that correct?
Alex: To answer your question shortly, yes. But software installation, when you really break it all down, is: you go and download something, normally over HTTP, and then you go and run some scripts to spray some files across your file system, after doing signature validation and other things like that.
With CoreOS, we don't ship anything at the per-package level on there. There isn't the equivalent of apt-get or yum, where you can take an individual package and install it. But we do ship Docker, and Docker has a nice tool where it will Wget, or not Wget, but do an HTTP fetch of some files and put them on your file system for you.
We also ship SSH, and we ship Wget and Curl, the tools to roll it all yourself if you need to, but the most convenient way, by far, out of the box, is just to use Docker. The model Docker has, of bundling your application with all of its dependencies, which is really a requirement of any Linux container, is the requirement of CoreOS.
We need you to bring all of the dependencies that are required to run your application with you. That's the big mental model shift: the dependencies of your application are actually a dependency of your application. They are not something that the host provides for you.
That's the big difference. Yeah, Docker is definitely the way. You can very conveniently pull down an image, build your own images, and you're off to the races.
Lucas: Very cool. How does a Web-app developer who's trying to use CoreOS, trying to get started, trying to deploy an application, have to think differently than with a traditional LAMP-style system, where it's all "apt-get install apache" and you set up your system that way? How do you have to think differently to get started with CoreOS as a Web-app developer?
Alex: First, anything you can run on Linux, you can run on CoreOS. That's no problem. The big shift is mainly around containerizing the different components of your application, so building an image for every single component of your application that you need, which is operationally a good best practice, and Docker makes it really, really easy.
It's probably something you want to do regardless of CoreOS, but we enforce that as a constraint. You would go and write some Docker files, again, the easiest way to get started. Go write some Docker files that pull in your application and get you running, and then go from there.
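To make "go write some Docker files" concrete, here is a minimal sketch of what a Dockerfile for one component of a web app might look like in this era. The base image, file paths, and commands are all illustrative, not taken from the interview:

```dockerfile
# Hypothetical Dockerfile for one component of a web app. The
# container carries its own dependencies (Python, pip packages)
# instead of relying on the host OS to provide them.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python python-pip
ADD . /app
RUN pip install -r /app/requirements.txt
EXPOSE 8080
CMD ["python", "/app/server.py"]
```

Building and pulling images like this replaces the "apt-get install apache" step of a traditional LAMP setup: the dependency lives in the image, not on the CoreOS host.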
Lucas: Great. It sounds like Web-app developers who haven't gotten used to Dockerizing their apps are going to have to start to in order to utilize some of this next-generation technology.
Lucas: You don't have to tell us if you can't, but what's the largest known installation of CoreOS? Is this something that has been tested at scale? You said Google-scale earlier; that's a pretty big claim. Is this something we can rely on? Usually, when large systems grow to a certain size, problems appear. Have the kinks been worked out yet? When will they be worked out? Can you give us some sense of the maturity of the project?
Alex: Sure. Last August, we shipped our alpha. The way our updates work is that our alphas are all release candidates for betas, and they're bit-for-bit identical. When we ship an alpha, you might have just gotten the beta version without even knowing it; the only way we know is that, after a certain amount of testing, we're ready to mark it as beta.
Where we're at right now is we've shipped alphas, and we also ship betas, as of about a month ago now. The next step is, well, all those betas are release candidates for stable. As soon as we get to the right quality level, have worked out enough kinks, and the mailing lists quiet down, we'll call one stable.
That point is really when we expect to see the more major production deployments of this stuff, simply because until then we're advising users not to do it at all. But you still see the typical "developers putting the stuff into production when they shouldn't, even though it's unstable" type of thing.
It doesn't mean we have Google or Twitter putting it into production, by any means, but we definitely have been very similar to the Docker community in that people are injecting it in little points in their infrastructure and getting ready to play with it. The team is really looking forward to getting that stable release out, when we can say to our users with a straight face, "It's OK to put this in production." We'll see what happens from there.
Lucas: Have you guys run a large cluster of etcd? Do you know if etcd breaks down at a certain point? Does it work with a hundred nodes, or a thousand nodes, or 10,000 nodes? Do you know that for sure yet?
Brandon: In the latest release of etcd, we added an additional, essentially, node type so that etcd can scale up better. Up until now, every etcd machine had been involved in consensus. Now, with the 0.4 release that should be rolling out to alpha here soon, you can essentially have smart standbys that redirect requests to the nodes that are actually involved in the consensus.
Just spinning up VMs, we spun up around 50 VMs or so on AWS to test this out, and then we hit the limit on AWS for the number of IPv4 addresses we could get. It should scale pretty well; it's a very simple scaling pattern for etcd.
Alex: We have an alpha out right now. It's out, 0.4…
Brandon: It should be rolling out here in the next day or so.
Alex: OK. We're putting 0.4 out on our alpha channel pretty much immediately. Then, from there, you'll see a little call to arms: "Who has the biggest Amazon account we can dabble with on this?" We need to get ours up, too, but it's actually quite difficult to spin up a thousand machines on Amazon. I don't know if you've tried, but you have to go through a lot of red tape to get that.
Lucas: I actually did try, because when I built AppFog, we had a hundred thousand applications deployed. We had to get many more than a thousand servers on Amazon running for that.
Lucas: As I understand it, CoreOS as a system is built out of three parts: Docker as one major keystone, etcd as a second, and systemd as a third. We've talked about Docker; etcd we haven't talked about yet. Can you tell our audience what etcd does? What's it meant for?
Brandon: Etcd is meant for coordinating a cluster. In a lot of the research and white papers, and in the practical systems that people build, you essentially need this little piece of consensus in the cluster. You need a place where you can store configuration and store information about services running in the cluster.
The important thing is to be able to do this in a consistent way. When I say that we're master-electing something, we're moving the database master from machine A to machine B. We actually need to do that atomically; we can't have half the cluster pointing here and half the cluster pointing there.
Etcd allows you to do this sort of registration and service-discovery stuff, and also to update the values in that key space in an atomic manner. It uses a consensus algorithm called Raft, which is a simplified version of Paxos.
Usually, you have a cluster of three to five machines to give you some ability to tolerate outages. In a five-node cluster, you can have two machines go away and still have the cluster making forward progress. Simply put, it's a key-value store that has the ability to make these consistent changes.
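The two properties Brandon describes, majority quorum and atomic updates, can be sketched in a few lines. This is a toy model, not the real etcd API; the key names and the `TinyStore` class are invented for illustration:

```python
def failures_tolerated(cluster_size):
    """A Raft cluster stays available while a majority survives:
    a 5-node cluster tolerates 2 failures, a 3-node cluster 1."""
    return (cluster_size - 1) // 2

class TinyStore:
    """In-memory stand-in for etcd's key space with test-and-set."""
    def __init__(self):
        self.data = {}

    def compare_and_swap(self, key, expected, new):
        # Succeeds only if the current value matches `expected`,
        # so two clients cannot both win a master election.
        if self.data.get(key) == expected:
            self.data[key] = new
            return True
        return False

store = TinyStore()
store.data["/db/master"] = "machine-a"
# Two machines race to move mastership; only one CAS can succeed.
a_wins = store.compare_and_swap("/db/master", "machine-a", "machine-b")
b_wins = store.compare_and_swap("/db/master", "machine-a", "machine-c")
```

The compare-and-swap is exactly why half the cluster can never point at machine B while the other half points at machine C: the losing writer sees a stale expected value and fails.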
Lucas: Very cool. Then the third part is systemd. How does systemd fit into the CoreOS system?
Brandon: Systemd is the init process of each individual CoreOS machine. It takes a job definition, some type of binary to run, whether that's a docker run or a systemd-nspawn or whatever.
It gets it on the machine, launches it, and then monitors the process's state. It makes sure the process continues running. It puts constraints on the process via cgroups: it can set memory limits, and it can restrict certain properties about what the process is able to do to the system.
The nice thing about systemd, and the thing that we leverage, is that we combine systemd and etcd with a tool called Fleet. Since systemd is an API-driven init system, which is something we really haven't had before, we can have individual machines start running jobs based on changes within the cluster state that is put into Fleet, jobs that were scheduled to the cluster, not to any individual machine.
Lucas: Very cool. When I've used CoreOS, the things that I typically do inside of systemd are to tell CoreOS which Docker containers I want to run and how I want to run them. You put the docker run command inside of the systemd unit file, and it also sets the etcd variables. Are those the usual things that people do with systemd inside of CoreOS?
Brandon: Yeah, those are common things to do.
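The pattern Lucas describes, a unit that runs a container and announces it in etcd, might look something like the sketch below. The service name, image, port, and etcd key are all hypothetical; only the overall shape (docker run in ExecStart, etcdctl in the post hooks) follows what's described above:

```ini
# myapp.service -- hypothetical systemd unit of the kind discussed:
# run a Docker container and register it as a service in etcd.
[Unit]
Description=My web app container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=/usr/bin/docker pull example/myapp
ExecStart=/usr/bin/docker run --rm --name myapp -p 8080:8080 example/myapp
ExecStartPost=/usr/bin/etcdctl set /services/myapp '{"port": 8080}'
ExecStop=/usr/bin/docker stop myapp
ExecStopPost=/usr/bin/etcdctl rm /services/myapp
```

Because systemd runs the `docker run` in the foreground, it can supervise the container directly, restarting it or cleaning up the etcd entry when it stops.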
Lucas: Great. One thing: there's been a lot of talk within the community about the adoption of systemd within other Linux distributions, and that's been quite controversial. Where do you guys stand on that controversy? What do you think about systemd? Do you think it should be replacing some of the older init systems? How do you see systemd moving forward?
Alex: I think, whenever there's change, people get upset. What's changing is one of the oldest, most arcane parts of UNIX and Linux. For the longest time, we've been trying to make Linux look like UNIX; I mean, the namesake of Linux is that.
Fundamentally, Linux is not like the UNIX that was created by the Bell Labs guys a while ago. It's its own whole thing, and it has all these features and capabilities that are completely different and not supported at all. Yet we don't take advantage of those, because people are just entrenched in their ways.
It goes back to the human condition on anything — change is hard, especially with server infrastructure, where state of the art is, "Get it running and never touch it again." I think it's just natural for people to complain about any major change in any component that people don't like to change. [laughs]
Also, being a sysadmin at heart, I'm extremely excited. I mean, it's hard to learn new things and to learn how to do things differently, but it's also awesome. It's just so much more feature-rich. It's like, "Welcome to 2014. Our logs can output in JSON now." [laughs] We don't have to write a Perl script to parse a date and put it into an object in our scripting language anymore. Like, whoa. [laughs]
I think it's really good, and if you look at the technical chops of the people making these decisions, we've seen universally that all the distributions are switching to it.
Everybody agrees; you just get people that don't like change, and they're the vocal folks that like to talk about not making the change. I think that would happen with any system, not just a computer system. [laughs] With anything that's been around for a while, a big change is going to get people upset.
Lucas: That makes sense. Brandon, you brought up the Fleet stuff earlier. If you want to move from just one CoreOS machine, stitch a second one in, and have them cluster, can you explain briefly how that's done?
Brandon: You can think of etcd as a substrate in the cluster for doing configuration. It's like /etc, but cluster-wide. Systemd is able to run processes on a single machine. With Fleet, what we do is leverage etcd: you describe, with a systemd unit file, some work that you want to get done in the cluster.
You hand it to Fleet, and then, via a scheduling algorithm in the Fleet daemon, it looks at all the machines available and tells one of them, "Hey, can you start this work?" Then, if that machine dies, Fleet may reschedule the work, and you're able to get output and information about what's running where across your cluster.
Fleet will keep track of whether the process is running on a machine, whether it died, the information that you'd want to get out so you can go and investigate and debug failures and that sort of thing.
Lucas: Got it. Let me see if I understand. To get two CoreOS machines to cluster together, they have to share an etcd entry in the etcd registry so that the etcds are connected. Then, once the etcds are connected, Fleet can tell each system in the cluster what job it needs to run. Is that how it works?
Brandon: Right. You write a systemd unit file, a service file, just like you normally would, and instead of choosing one of the two machines to put the work onto, you say, "Fleet, start this unit file," and it decides which of the machines will run it.
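A cluster-scheduled unit looks almost identical to a local one, plus a fleet-specific section. The sketch below is hypothetical (service name, image, and the exact constraints are invented); the `[X-Fleet]` section is what tells fleet how to place the unit:

```ini
# myapp@.service -- the same kind of systemd unit, but handed to
# Fleet, which picks a machine in the cluster to run it on.
[Unit]
Description=My web app (cluster-scheduled)

[Service]
ExecStart=/usr/bin/docker run --rm --name myapp example/myapp
ExecStop=/usr/bin/docker stop myapp

[X-Fleet]
# Don't place two copies of this unit on the same machine.
Conflicts=myapp@*.service
```

You would then run something like `fleetctl start myapp@1.service` from any machine in the cluster, and fleet, not you, decides where the unit lands.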
Lucas: That's very, very cool. You guys mentioned at the beginning that CoreOS updates itself on different partitions. Can you talk a little bit more about that? That's pretty unique. Is that something that you guys are developing? I haven't heard about this before.
Alex: That's actually not unique at all, but we don't see it on servers very often. It happens in hardware appliances all the time. It happens on your Android phone or your Apple phone, and it happens in consumer-facing stuff a lot, but not so much on Linux or on a server.
What we do on the server side is, essentially... the way updates work today is you have this state that you're at when the operator logs into a machine, or when your Puppet, or whatever your configuration management is, is invoked.
Then it runs a set of scripts to get you to this other known state, or the operator runs a set of commands to get you to this other known state. Then, if anything goes wrong along the way, you're now in a weird, unknown place and you need to go fix that.
In order to manage anything at scale, you have to have consistency. Any tiny inconsistency will throw everything off, and then you need a human to go in there and resolve the issue.
The way that we do it is called a double-buffered update, where essentially we have one running root file system with an update agent in it that's checking for updates.
In the background, it goes and downloads and applies the update to the one that's not running, the passive partition.
When the time is right, either as determined by the user or determined if you're letting CoreOS take care of itself, the machine will bounce over to that new version and you're just running the latest thing.
Because we've architected everything such that your applications carry all of their dependencies with them, the only thing you have to worry about is: is Linux going to break? Is Linux itself going to break the interface between applications and the kernel?
Linus has been adamant for a very long time that Linux never breaks user-land. That is a pretty narrow concern, but it happens sometimes. That's why we have all these different channels of testing, and all of our internal labs and everything that we do, before we call things stable and put them out there.
Yeah, this double-buffered update takes a concept that's been tried and true in areas where you want a lot of automation, like appliances and consumer devices, things that you want to always work, and always work consistently. We're applying it to servers so that you can consistently manage lots of servers without having to go into them and fix them up when an update goes bad.
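The active/passive flow described above can be sketched as a small state machine. This is a toy model of the idea only; the partition names, version strings, and the `ABSystem` class are invented for illustration:

```python
class ABSystem:
    """Toy model of a double-buffered (A/B partition) update."""

    def __init__(self, version):
        self.partitions = {"A": version, "B": None}
        self.active = "A"

    @property
    def passive(self):
        return "B" if self.active == "A" else "A"

    def stage_update(self, new_version):
        # The update agent writes to the partition that is NOT running.
        self.partitions[self.passive] = new_version

    def reboot(self, update_ok=True):
        # Bounce over to the freshly written partition. If the new
        # version fails to boot, the old partition is untouched and
        # the machine keeps running the previous version.
        if update_ok and self.partitions[self.passive]:
            self.active = self.passive
        return self.partitions[self.active]

machine = ABSystem("coreos-old")
machine.stage_update("coreos-new")
running = machine.reboot()  # switches to the staged version
```

The key property is that staging an update never touches the running root file system, so a failed update leaves the machine exactly where it was.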
Lucas: That's very, very cool. If you have a cluster of five machines, do they all restart at the same time? Or is it a random timing, how does that work?
Brandon: The machines are grabbing the update from the update service, but they don't all reboot at the same time. There's a tool built into CoreOS called Locksmith.
Locksmith essentially uses etcd to take out a mutex. You take out a lock that says, "I'm currently going to apply the update by rebooting, so nobody else reboot right now," so we don't lose the entire cluster at once.
This is a mechanism that you can use, and there are various tools to manipulate it or to check on the status of the locks and that sort of thing.
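The reboot lock Brandon describes can be modeled as a counting semaphore kept in etcd. The sketch below simulates that semantics locally with plain Python, so the machine IDs and the `RebootLock` class are illustrative, not Locksmith's actual implementation:

```python
class RebootLock:
    """Toy model of a cluster-wide reboot semaphore (à la Locksmith)."""

    def __init__(self, max_holders=1):
        self.max_holders = max_holders
        self.holders = set()

    def acquire(self, machine_id):
        # Only `max_holders` machines may reboot at once, so the
        # cluster never loses quorum to simultaneous reboots.
        if len(self.holders) < self.max_holders:
            self.holders.add(machine_id)
            return True
        return False

    def release(self, machine_id):
        # Done rebooting: let the next machine take its turn.
        self.holders.discard(machine_id)

lock = RebootLock(max_holders=1)
first = lock.acquire("core-1")   # core-1 may reboot now
second = lock.acquire("core-2")  # core-2 must wait its turn
lock.release("core-1")
third = lock.acquire("core-2")   # succeeds once the lock is free
```

In the real system the holder set lives in etcd and is updated with atomic compare-and-swap writes, so two machines can never both believe they hold the last slot.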
Lucas: Yeah, that makes a lot of sense. Using that distributed key value store in different ways with the Fleet stuff and the Locksmith, that's a very smart way to do it.
I've recently been noticing some of the external contributions around the CoreOS project; in particular, I saw the recent SkyDNS stuff, which is really, really cool.
They now are kind of hopping on the etcd bandwagon and have etcd integration for service registration that supports DNS. What do you guys think about that, and what do you think about the ecosystem around CoreOS?
When I read that, it made a lot of sense, because as you launch more stuff within your CoreOS cluster, one of the big problems once you have multiple servers is: how do those different containers know about each other? If you have a distributed DNS system based on etcd, that's a great way for those services to look each other up using simple DNS. How do you guys think about that?
Alex: We've always considered ourselves a platform for building platforms. As we design features and build out our products, sometimes it's a little hard to use, but it's often full-featured in that people can do whatever they want with it.
That's what people need when you're building higher-order services. We give you access to all the features and everything within etcd. We'll add some features to make it easier to use, but at the end of the day, you can get into the nitty-gritty of all that distributed stuff and using it however you want, which is really powerful for people that want to build higher-order platforms.
Our task currently is to make sure that really low-level stuff that we're building is really solid and good. As we get that lower-level platform and get everybody's trust, that we're here and we're something that you can build your own platforms on top of, it will continue to make it easier and easier for you to do that.
It's great. I think it's an example of…etcd was released last summer, so less than a year ago. To see projects being built on it, regardless of if they're running it on CoreOS or not, is awesome.
We built it only because something like it didn't exist. We believe a lot in using the best tool for the job and really staying product-focused, not technology-focused.
Whatever tools help people put out products that are highly available and help them run their infrastructure better, we'll use the best tool for the job and incorporate it as needed, but we'll also build the stuff that doesn't exist. I'm glad to see we struck a chord with etcd and Fleet, because they both seem to be picking up pretty quickly.
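To make the SkyDNS-on-etcd idea from a few exchanges back concrete: a DNS name can be treated as an etcd key path, reversed, so that services register themselves and others resolve them with a plain lookup. The key layout and record format below are illustrative, not SkyDNS's exact schema:

```python
def name_to_key(fqdn, prefix="/skydns"):
    # "web.cluster.local" -> "/skydns/local/cluster/web"
    return prefix + "/" + "/".join(reversed(fqdn.split(".")))

records = {}  # stand-in for etcd's key space

def register(fqdn, host, port):
    """A container announces itself under its DNS name's key path."""
    records[name_to_key(fqdn)] = {"host": host, "port": port}

def resolve(fqdn):
    """Another container looks the service up by ordinary DNS name."""
    return records.get(name_to_key(fqdn))

register("web.cluster.local", "10.0.0.5", 8080)
answer = resolve("web.cluster.local")
```

Because the records live in the replicated etcd key space, every machine in the cluster serves the same answers, which is what makes per-container DNS resolution work across a distributed system.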
Lucas: Awesome. Can you run etcd outside of CoreOS? Can I run it on my Mac as a daemon or on Ubuntu?
Brandon: Yeah. It's a Go binary, you can run it wherever. We've even ported it to Windows.
Lucas: Cool. Alex, I've got a question for you. You don't have to tell us if you can't, but how does CoreOS plan to make money? Because this is all open source, how do you guys plan to monetize?
Alex: Yeah, we have a lot of things going on there, to be announced. The high level is essentially, we don't intend to commercialize CoreOS itself. We intend to commercialize products built on top of it.
Just like we said, it's a platform for platform builders. We'll build some tools on top of CoreOS that companies can optionally buy from us. Our design philosophy is very much, "Always speak open protocols, but then do a really good implementation of an open protocol."
If people want to buy it from us so that they can use our thing, they can, but if they want to go implement it themselves, because it's speaking an open protocol, that's fine. That's sort of TBD, but we've been working with paying folks for a while now, and I think there's good traction. We haven't shipped a stable product yet, much less a commercial one.
Lucas: If I was a big company and I wanted to implement CoreOS for my next upcoming project, I could go to you guys and you could help me on a project basis?
Alex: Sure, definitely.
Lucas: Great. I'm sure a lot of our viewers are going to be interested in that. On a slightly tangential topic, I'm curious: what do you guys think are the biggest problems for real-life Docker adoption today for organizations?
Alex: One is multi-machine. The Docker guys have to be working on this, but right now, Docker is a very single-host focused thing.
Sorry, oh, here it is. There was a phone ringing.
Docker is great on a single host. Again, the Docker guys have to be getting to this at some point and will make that cleaner, so I think that's a temporary concern. Probably the biggest issue is that all of the problems around operations still exist with Docker.
You don't actually eliminate any problems; they just move around. In a traditional environment, let's say the Heartbleed issue comes out. That has to be dealt with, with or without Docker. Docker doesn't fix that for you.
What happens if everything is inside Docker containers? Now that OpenSSL bug is distributed among a bunch of containers, and you have to think about how to manage that.
That's traditionally been a job for the operations team, but Docker has moved it to be more of a developer problem, not the operations team's problem. I think that'll turn a lot of things on their head.
I do think that's actually for the best. At the end of the day, if you're putting OpenSSL in your application, your application is responsible for the security of OpenSSL as well as its own security.
I think there will have to be some compromises in terms of how that's managed, because very few folks are OpenSSL experts and aren't going to be able to track that. It probably needs to be tracked centrally, essentially, but again, there are workarounds. For instance, you could build base images and [indecipherable 0:30:50] developers start from there.
That shift of responsibilities for larger organizations could be a big question mark about how that works going forward.
Lucas: All right. Cool. One last question. Last week we talked to Gabriel Monroy, who built Deis, and Deis is now running on CoreOS, so they're one of the platforms that you've enabled. What do you think about the Deis project and the Flynn project, and platform-as-a-service on top of Docker and CoreOS?
Alex: Again, a platform for building platforms is awesome. Blake Mizerany, who was the first employee at Heroku and the main engineer there in the early days, joined the CoreOS team. He was like, "Man, I just want to help other people build their stuff and get it all right. We spent all our time on the problem that CoreOS is working on, so let's go build the ideal solution for everybody and solve this once and for all."
Seeing the Deis guys sign up was pretty awesome, because it's a perfect validation. Now the guy who built Heroku is working on the platform underneath the open-source versions of Heroku. That shows that our approach is on track and that we're solving problems for folks in the areas we expected, which is always good for an early-stage company. When we started out, it was just a bunch of crazy ideas, and we could have botched any of this.
We're finding that people need this. In fact, they really need it. [laughs]
Lucas: Yes, they do. Just to give you a little hint: CenturyLink is also building a platform on your platform, with CoreOS, and it's a project that we don't talk about too much yet. We're very excited, very grateful for your work, and grateful for your time today to explain a little bit more about CoreOS to our audience. We hope to have you back as things progress, as CoreOS gets closer to production and we hear about some of the big entrants out there.
I'd love to get an update, so thank you both for your time.
Alex: All right. Yeah. Thank you so much.
Podcast RSS feed: http://www.centurylinklabs.com/podcast/rss.xml