

Go does not use dynamic libraries (so every project needs to manage a private copy of the whole Go software universe it depends on), and it mostly identifies every bit it depends on by commit hash. That's pretty much forced on you by the way Go works. So upstreaming in Go land means writing a fix, upstreaming it in its original project, and then upstreaming the update to a new commit hash in every other Go project that uses the original project. Those projects may (and will) keep using private copies of component A without the fix ad vitam aeternam (years, in software land). The end result is that it is not possible to pay someone to write and upstream the fix for a bug in component A and expect the fix to be picked up by everything that uses component A.
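
In today's Go this pinning is visible right in go.mod (at the time of this discussion it was vendor/ directories and tools like glide or dep, but the effect is identical). A minimal sketch, with hypothetical module paths:

```
// go.mod for a hypothetical consumer of "component A".
// The pseudo-version pins one exact commit of the dependency; a fix merged
// upstream changes nothing here until someone bumps this line, and the
// equivalent line in every other project that consumes component A.
module example.com/consumer

go 1.21

require github.com/example/component-a v0.0.0-20230101000000-abcdef123456
```

Picking up an upstream fix means every consumer independently runs something like `go get github.com/example/component-a@<new-commit>` and re-vendors; nothing propagates automatically.
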
Hopefully this small novel answered your question, but if you'd like even more detail than this, feel free to take this offline. I don't much like forks in general, but right now I don't see how anyone could claim to support kubernetes or docker without forking them to death.

We work pretty closely with Redhat already (high touch beta; we employ upstream kernel devs who work with your kernel devs and hw vendors, etc.), but we got rubbed a bit raw by the entire interaction with everyone who wasn't an engineer on the openshift side of things. Also, the openshift pricing (even for large existing RH customers) is silly, but that isn't your fault. We had a long call with your most amazing Jeremy Eder and, I believe, Dan W at one point going over some technical concerns, and overall it is/was solid tech. The only part I strongly disliked was the many thousands of lines of ansible playbooks, which took ~25 minutes to complete to build a bare-metal HA cluster (the router in HA mode had some limitations related to the vrrp bits, if memory serves). Many of the patches didn't seem like they'd ever be fed back upstream, hence my opinion of it being a fork.

Ultimately, we decided we wanted to stay closer to open source kubernetes, where we could directly send patches upstream ourselves and then deploy them to our clusters, so we wanted to start with a commercial "closer to OSS" version of k8s as a known good platform. Going vanilla k8s gave us more options for both support and the general open source community, so we started looking into that. Tectonic as a k8s distro is almost entirely OSS, sans tectonic-console, some of the authentication bits that wrap prometheus + alertmanager with dex, and perhaps some of the update bits. It fit in better with our model, and we went with them as they wanted to work with us. We ended up being the first people with EL7 tectonic kubelets in pre-prod, as far as Alex Polvi told me.

I found the origin spec file quite odd, as it pulled down the kubernetes source in addition to its own and then put them together with many patches.

The origin spec file had a specific git commit for a kubernetes release candidate, not a stable version, and a huge pile of patches on top of it. It changed default ports, closed some ports, required tls for everything, removed federation support from the hyperkube build, added the scc stuff, removed some functionality, added rbac bits (that eventually made it into k8s, so thanks!), etc. There was a specific bug we encountered in openshift which was fixed in a point release of upstream k8s; we were told fixing it in openshift would require a big rebase and would take at least one or two more point releases to land. I found the same with the openshift^Wprojectatomic version of docker, which is directly a fork (it caused a row between redhat and docker inc, plus Dan's amusing #nobigfatdaemons twitter hashtag). Note that this is fairly dated, as I looked at openshift right after 3.2 (enterprise) was released.
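
For concreteness, the packaging pattern described above looks roughly like this. This is a hypothetical sketch; the file names, commit, and patch names are made up for illustration and this is not the actual origin spec:

```
# Hypothetical sketch of "own source + pinned upstream source + patch pile".

# Pinned to an upstream release-candidate commit, not a tagged release:
%global kube_commit abc123def456

Name:    origin
Source0: origin.tar.gz
Source1: kubernetes-%{kube_commit}.tar.gz

# Local patches carried on top of the pinned upstream source:
Patch0:  change-default-ports.patch
Patch1:  require-tls-everywhere.patch
Patch2:  add-scc.patch

%prep
# unpack origin's own source
%setup -q
# also unpack the pinned kubernetes tarball alongside it
%setup -q -T -b 1
%patch0 -p1
%patch1 -p1
%patch2 -p1
```

Every patch in that pile has to be rebased whenever the pinned commit moves, which is why a point-release fix upstream can still take several openshift releases to arrive.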
