|
| tremon wrote:
| I understand where the name is coming from, but MicroShift is
| treading dangerously close to a certain other company's name and
| trademarks. What are the chances Microsoft will see this as
| trademark infringement?
| ww520 wrote:
| Whatever happened to the Michaelsoft case?
| m3adow wrote:
| Will this be available as an open source version (MicroOKD?) as
| well? I don't see it mentioned in the blog post.
|
| I'd love to see how the resource consumption compares to k3s.
| freedomben wrote:
| I'm sure it will. It's more a question of "when" than "if."
| cpitman wrote:
| This is an open source project from the start (as are most
| things Red Hat creates); you can find the code here:
| https://github.com/redhat-et/microshift
| candiddevmike wrote:
| It's getting harder and harder to find the Kubernetes part of
| OpenShift--wtf is openshift-dns? I'm not sold that this is
| better than k3s, and I don't think Red Hat is capable of
| balancing the needs of a sprawling, Borg-like platform that
| assimilates all kube functions against those of a lean kube
| experience.
| BossingAround wrote:
| > wtf is openshift-dns
|
| I think one thing I'm definitely missing from the OpenShift
| docs [1] is reasoning. What does it add? Why would I want to
| learn to use an operator instead? Beyond that, it's pretty
| clear that it's just an operator on top of CoreDNS.
|
| I do think the docs are utterly devoid of Kubernetes content.
| Historically, I think RH tried to differentiate themselves
| from K8s. Now, that can definitely hurt knowledge migration
| and transfer.
|
| [1] https://docs.openshift.com/container-
| platform/4.8/networking...
| freedomben wrote:
| I can't speak for Red Hat, but I have a lot of experience with
| OpenShift.
|
| Basically, all of these operators are there to allow
| customization. The philosophy behind OpenShift is to move
| everything possible into operators so that it can be managed
| in a cloud-native way. This has a bunch of benefits, like
| being fully declarative and letting you keep your whole
| config in version control.
| nullify88 wrote:
| openshift-dns is just their custom operator which deploys and
| configures CoreDNS. OpenShift is k8s with a lot of RH operators
| bundled in to configure things like monitoring, logging,
| networking, ingress, etc.
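|
| For illustration, that operator config is itself just another
| API object you can keep in git. Here's a rough sketch using
| the Go types from github.com/openshift/api (written from
| memory, so treat the exact field names as approximate):
|
|   package main
|
|   import (
|       "fmt"
|
|       operatorv1 "github.com/openshift/api/operator/v1"
|       metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|       "sigs.k8s.io/yaml"
|   )
|
|   func main() {
|       // The cluster DNS operator watches a single
|       // cluster-scoped DNS object named "default". This
|       // example forwards one internal zone to a corporate
|       // resolver (made-up values).
|       dns := operatorv1.DNS{
|           TypeMeta: metav1.TypeMeta{
|               APIVersion: "operator.openshift.io/v1",
|               Kind:       "DNS",
|           },
|           ObjectMeta: metav1.ObjectMeta{Name: "default"},
|           Spec: operatorv1.DNSSpec{
|               Servers: []operatorv1.Server{{
|                   Name:  "corp",
|                   Zones: []string{"corp.example.com"},
|                   ForwardPlugin: operatorv1.ForwardPlugin{
|                       Upstreams: []string{"10.0.0.53"},
|                   },
|               }},
|           },
|       }
|       out, _ := yaml.Marshal(dns) // the YAML you'd commit to git
|       fmt.Println(string(out))
|   }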
| asim wrote:
| Is this anything like the Canonical distribution called microk8s?
| https://microk8s.io/
|
| On the naming front, why is everyone calling it MicroX? There
| can be only one Micro. https://micro.dev
| BossingAround wrote:
| > why is everyone calling it MicroX
|
| Probably because "micro" means "small" in Greek, and the whole
| Kubernetes ecosystem uses Greek.
|
| > There can be only one Micro. https://micro.dev
|
| Call me grumpy, but I spent 5 minutes on the site and still
| have no idea what "micro.dev" is other than "a platform for
| cloud-native development" (yeah, what isn't nowadays?)
| kristianpaul wrote:
| I keep saying Microchip instead
| seeekr wrote:
| Red Hat's answer to (now SUSE's) k3s?
| bbarn wrote:
| They probably should have done some digging before trying to
| coin a new name. Microshift is a huge Chinese company that has
| been making bicycle parts for over a decade.
| blinkingled wrote:
| I don't see any reason to bring the IoT buzzword into this - but
| I am not a marketing guy. Standard OpenShift (OK, I will try to
| refrain from profanity) is extremely resource-intensive even for
| most fat enterprise environments. They could have just one
| product based on what they are calling MicroShift - less
| resource usage, modular to the core so customers can add
| precisely what they need - some will use the IoT profile, some
| will use minimal-enterprise, and so on. Right now they just try
| to smother you with a lot of baggage and burden, and their
| solution to everything is to run a managed one in AWS - i.e. to
| dictate the choices.
|
| I just never liked the idea of taking something like open
| source k8s and creating a Red Hat-specific version that
| requires different treatment and a whole lot of other
| enterprise stuff, including RHEL. And it doesn't work all that
| much better than GKE or EKS or even building your own cluster
| (I have done all 3).
|
| They should have just created tooling around standard k8s and
| allowed customers to use the good parts - deployment
| workflows, s2i, etc. - basically plugging the gaps on top of
| standard k8s. I can totally see a lot of customers seeing
| value in that.
| hosteur wrote:
| That would be a much better product for the users. But not for
| the business. Less vendor lock-in.
| aspenmayer wrote:
| Better still for the business to make a product that people
| use!
| nullify88 wrote:
| I'm running both OpenShift and k3s in production, and there
| isn't that much that requires different treatment between the
| two. There are some OpenShift-specific APIs (like Routes,
| which are terrible) and some quality-of-life improvements
| (the service-ca signer), but nothing drastic.
| m3adow wrote:
| > like Routes, which are terrible
|
| Huh, interesting. What do you not like about Routes? My team
| provides an IaaS solution for internal developers in my
| company, and a lot of developers seem to have fewer problems
| with OpenShift's service exposure abstraction (Routes) than
| with pure Kubernetes.
| freedomben wrote:
| Routes are one of the biggest things I miss when I'm on
| vanilla K8s. I don't see how anybody could prefer Ingress to
| Routes, but to each their own.
| candiddevmike wrote:
| Routes suck because they're basically the same-ish API as
| Ingress, but now your developers have to maintain two kinds
| of manifests/helm templates depending on whether they're
| targeting OpenShift or Kubernetes.
| nullify88 wrote:
| You can create Ingress resources on OpenShift and it will
| automatically create routes for you. You can customise
| the generated Route by using annotations on the Ingress
| resource.
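|
| As a sketch of what that looks like (hypothetical names, and
| I'm recalling the termination annotation from memory, so
| double-check it), using the client-go types:
|
|   package main
|
|   import (
|       "fmt"
|
|       networkingv1 "k8s.io/api/networking/v1"
|       metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|       "sigs.k8s.io/yaml"
|   )
|
|   func main() {
|       pathType := networkingv1.PathTypePrefix
|       backend := networkingv1.IngressBackend{
|           Service: &networkingv1.IngressServiceBackend{
|               Name: "my-app", // hypothetical service
|               Port: networkingv1.ServiceBackendPort{Number: 8080},
|           },
|       }
|       ing := networkingv1.Ingress{
|           TypeMeta: metav1.TypeMeta{
|               APIVersion: "networking.k8s.io/v1",
|               Kind:       "Ingress",
|           },
|           ObjectMeta: metav1.ObjectMeta{
|               Name: "my-app",
|               Annotations: map[string]string{
|                   // Read by OpenShift's router when it
|                   // generates the Route from this Ingress:
|                   "route.openshift.io/termination": "edge",
|               },
|           },
|           Spec: networkingv1.IngressSpec{
|               Rules: []networkingv1.IngressRule{{
|                   Host: "app.example.com",
|                   IngressRuleValue: networkingv1.IngressRuleValue{
|                       HTTP: &networkingv1.HTTPIngressRuleValue{
|                           Paths: []networkingv1.HTTPIngressPath{{
|                               Path:     "/",
|                               PathType: &pathType,
|                               Backend:  backend,
|                           }},
|                       },
|                   },
|               }},
|           },
|       }
|       // Plain Ingress YAML; no Route needed in the chart.
|       out, _ := yaml.Marshal(ing)
|       fmt.Println(string(out))
|   }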
|
| This has worked well for us: not all helm charts are
| OpenShift-friendly, but they usually do allow customising the
| Ingress resource with annotations, or we patch it in via
| Kustomize.
|
| https://docs.openshift.com/container-
| platform/4.8/networking...
| nullify88 wrote:
| A big inconvenience is that for HTTP2 Routes, or edge /
| re-encrypt Routes with a custom TLS certificate, the TLS
| certificate and key must be inlined in the Route resource
| instead of referenced from a Secret, as Ingress resources do.
| I think this is a big oversight: Routes mix secrets and
| ingress configuration together.
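|
| Concretely, with the Go types from github.com/openshift/api,
| the cert material ends up as plain string fields on the Route
| itself (a sketch with hypothetical names):
|
|   package main
|
|   import (
|       routev1 "github.com/openshift/api/route/v1"
|       metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|   )
|
|   // edgeRoute builds an edge-terminated Route. Note there is
|   // no secret reference: the PEM blobs are inlined in the
|   // spec, which is what makes the whole object sensitive.
|   func edgeRoute(certPEM, keyPEM string) routev1.Route {
|       return routev1.Route{
|           TypeMeta: metav1.TypeMeta{
|               APIVersion: "route.openshift.io/v1",
|               Kind:       "Route",
|           },
|           ObjectMeta: metav1.ObjectMeta{Name: "my-app"},
|           Spec: routev1.RouteSpec{
|               Host: "app.example.com",
|               To: routev1.RouteTargetReference{
|                   Kind: "Service",
|                   Name: "my-app",
|               },
|               TLS: &routev1.TLSConfig{
|                   Termination: routev1.TLSTerminationEdge,
|                   Certificate: certPEM, // inline, no secretRef
|                   Key:         keyPEM,  // private key in the Route!
|               },
|           },
|       }
|   }
|
|   func main() {
|       _ = edgeRoute("-----BEGIN CERTIFICATE-----...",
|           "-----BEGIN PRIVATE KEY-----...")
|   }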
|
| It makes GitOps annoying because I don't want to treat the
| whole Route resource as a secret that needs to be encrypted
| or stored in vault.
|
| Do I also then treat Route resources as sensitive and deny
| some users access, on the grounds that they could contain
| private keys?
|
| I also have to worry about keeping the Route updated before
| certificates expire, instead of having cert-manager take
| care of it.
|
| So we use Traefik's IngressRoute.
| m3adow wrote:
| That I can relate to. For us, there's only one dev team that
| uses HTTP2 (financial industry, so HTTP2 is still seen as
| "new = beta"), and it encountered that problem. I have no
| idea how they solved it, though.
| nullify88 wrote:
| FWIW, although I've known for a while that OpenShift
| converts Ingress resources to Routes, I just found out that
| the Ingress Controller sets up a watch on the secretRef,
| which keeps the inline TLS in the Route in sync. That could
| be enough for some people.
| blinkingled wrote:
| Have you tried running istio for example in an enterprise env
| - they needed you to install a RH specific older version and
| IIRC that wasn't just for support. I could list some more
| things if I recalled hard enough.
| freedomben wrote:
| > _I don't see any reason to bring the IoT buzzword into this_
|
| IoT is certainly a buzzword, but it also does have real
| meaning, and this product is aimed squarely at the IoT edge
| devices themselves. It seems quite appropriate to use the
| term IoT to describe it.
| erulabs wrote:
| It's so excellent to see more "tiny" distros for Kubernetes.
| The resource requirements of the control plane have plummeted
| lately, and it'll be exciting to see more IoT devices running
| the full k8s API.
|
| At any rate, we have a couple of customers using MicroShift
| at https://kubesail.com and it works like a charm for
| home-hosting! Might have to add docs for it soon!
| windexh8er wrote:
| I'm curious which alternative distros you see most widely
| used? Also, KubeSail is awesome; looking to leverage it more
| this year.
| erulabs wrote:
| It used to be microk8s by a long shot, but starting about 6
| to 12 months ago, k3s leaped ahead in terms of simplicity
| and memory/CPU usage. k3s now runs great on a system with as
| little as 1 or 2 GB of RAM.
| jamesy0ung wrote:
| This vs k3s?
| latchkey wrote:
| > Functionally, MicroShift repackages OpenShift core components
| into a single binary that weighs in at a relatively tiny 160MB
| executable (without any compression/optimization).
|
| Sorry, 160meg isn't 'relatively tiny'.
|
| I have many thousands of machines running in multiple
| datacenters, and even getting a ~4 MB binary distributed onto
| them without saturating the network (100 Mbit) and slowing
| everything else down is a bit of a challenge.
|
| Edit: It was just over 4 MB using gzip, and I recently changed
| to xz, which decreased the size by about 1 MB. I was excited
| about that given the scale I operate at.
| dralley wrote:
| Relative to Kubernetes (well, specifically OpenShift, which
| is Kubernetes packaged with a lot of commonly needed extra
| functionality / tools).
|
| A "hello world" written in Go is 2 MB to begin with, so ~4 MB
| is a bit unrealistic for any substantial piece of software
| written in Go. Although if that colors your opinion of Go
| itself, you're certainly allowed to have that opinion :)
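|
| For reference, that ~2 MB is what a plain `go build` produces
| for nothing more than:
|
|   package main
|
|   import "fmt"
|
|   // Even this compiles to a binary on the order of 2 MB,
|   // before any stripping (-ldflags="-s -w") or compression.
|   func main() {
|       fmt.Println("hello world")
|   }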
| latchkey wrote:
| Your logic doesn't work, because adding functionality doesn't
| necessarily translate to a huge increase in binary size.
|
| The complex app (15k lines of code) that is distributed to
| these machines, written in Go, is just over 3 MB compressed
| with xz.
|
| Plus, this is still well over an order of magnitude off when
| trying to get 160 MB out to mobile devices, per the image in
| the article. It is a non-starter.
| dark-star wrote:
| They reported the 160 MB as the uncompressed (and probably
| unstripped?) size. If you compress that, it will be around
| 50 MB - still more than your 3 MB xz example, but at least
| you're now comparing apples to apples.
| The_Colonel wrote:
| > The complex app (15k lines of code)
|
| I would argue that 15K lines of code is far from a complex
| app as normally understood. It's a rather small app,
| especially in Go, which isn't exactly the most expressive
| language out there.
| [deleted]
| dralley wrote:
| I'm going to take a guess that even a "mini" distribution
| of Kubernetes is more than 15k lines of code though. k3s is
| quite a bit smaller than this but it's still only described
| as "<50 megabytes"
| jrockway wrote:
| I recently wrote a single-process k8s distribution (for
| tests). Compiled into a linux/amd64 binary with no special
| settings, it's 183M with go1.18beta1.
|
| Kubernetes is a lot of code, plain and simple.
| hrez wrote:
| I found the Go compiler isn't very efficient at eliminating
| dead code. I ran into it when switching from aws-sdk-go to
| aws-sdk-go-v2, and the binary jumped from 27 MB to 66 MB [1].
|
| Granted, a fully featured app will likely use all of the
| module's code, so it's not a factor then.
|
| [1] https://github.com/aws/aws-sdk-go-v2/discussions/1532
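|
| If anyone wants to reproduce this kind of measurement, build
| a program twice, with and without a single reference into the
| SDK, and diff the binary sizes. A sketch (the service client
| is arbitrary):
|
|   package main
|
|   import (
|       "fmt"
|
|       "github.com/aws/aws-sdk-go-v2/service/s3"
|   )
|
|   // Referencing one constructor is enough to pull a large
|   // chunk of the module into the binary.
|   var _ = s3.NewFromConfig
|
|   func main() {
|       // go build, then compare sizes with du -h;
|       // go tool nm <binary> | wc -l gives a rough symbol count.
|       fmt.Println("size test")
|   }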
| jrockway wrote:
| Yeah, I have a feeling that these giant autogenerated
| clients are tough to optimize. Too many layers of
| indirection; even the compiler is confused and just
| crosses its fingers and hopes it doesn't have to
| investigate too much.
| latchkey wrote:
| I'm not sure what the point of your argument is. Somehow we
| should just accept 160 MB downloads to a phone, because why?
| dralley wrote:
| I don't understand the point of your argument, because
| nobody (including the announcement above) is talking
| about phones.
|
| From the post:
|
| > Imagine one of these small devices located in a vehicle
| to run AI algorithms for autonomous driving, or being
| able to monitor oil and gas plants in remote locations to
| perform predictive maintenance, or running workloads in a
| satellite as you would do in your own laptop. Now
| contrast this to centralized, highly controlled data
| centers where power and network conditions are usually
| very stable thanks to high available infrastructure --
| this is one of the key differences that define edge
| environments.
|
| > Field-deployed devices are often Single Board Computers
| (SBCs) chosen based on performance-to-energy/cost ratio,
| usually with lower-end memory and CPU options. These
| devices are centrally imaged by the manufacturer or the
| end user's central IT before getting shipped to remote
| sites such as roof-top cabinets housing 5G antennas,
| manufacturing plants, etc.
|
| > At the remote site, a technician will screw the device
| to the wall, plug the power and the network cables and
| the work is done. Provisioning of these devices is "plug
| & go" with no console, keyboard or qualified personnel.
| In addition, these systems lack out-of-band management
| controllers so the provisioning model totally differs
| from those that we use with regular full-size servers.
|
| I don't read this and think "phones". This sounds like
| it's targeted at embedded industrial / telecom devices.
| (At least based on the examples they chose, I'm sure you
| could use it for other things).
|
| The word "mobile" doesn't actually appear anywhere on the
| page so I'm not sure where you got "mobile devices" from.
| marktangotango wrote:
| > without saturating the network (100 Mbit)
|
| Datacenter to datacenter and only 100 Mbit? Clearly there's
| more to the story here... :)
| latchkey wrote:
| Within the data center.
| dralley wrote:
| That's even worse though.
| latchkey wrote:
| The machines don't require a lot of bandwidth.
| blibble wrote:
| my fridge has gigabit networking
| capableweb wrote:
| > I have many thousands of machines running in multiple
| datacenters, and even getting a ~4 MB binary distributed onto
| them without saturating the network (100 Mbit) and slowing
| everything else down is a bit of a challenge.
|
| May I suggest CAS (content-addressable storage) or something
| similar for distributing it instead? I've had good success
| using torrents to distribute large binaries to a large fleet
| of servers (that were also in close physical proximity to
| each other, in clusters with some more distance between them)
| relatively easily.
| latchkey wrote:
| Thanks for the suggestion, but as weird as this sounds, we
| also don't have central servers to use at the data centers
| for this.
|
| The machines don't all need to be running the same version of
| the binary at the same time, so I took a simpler approach:
| each machine checks for updates on a random schedule over a
| configurable amount of time. This distributes the load evenly,
| and everything becomes eventually consistent. After about an
| hour, everything is updated without issues.
|
| I use Cloudflare Workers to cache at the edge. On a push to
| master, the binary is built in GitHub CI and a release is
| made after all the tests pass. There is a simple JSON file
| where I can define release channels pinning a specific
| version to a specific CIDR (also on a release CI/CD pipeline,
| so I can validate the JSON with tests). I can
| upgrade/downgrade and test on subsets of machines.
|
| The machines, on their random schedule, hit the CF Worker,
| which checks the cached JSON file and either returns a 304 or
| the binary to install, depending on the parameters passed in
| on the query string (current version, IP address, etc.). The
| binary is downloaded and installed, then the process quits
| and systemd restarts the new version.
|
| Works great.
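|
| The client side really is about a page of Go. A trimmed-down
| sketch of the idea (endpoint, paths and versions are made up,
| and the real code also verifies the download):
|
|   package main
|
|   import (
|       "fmt"
|       "io"
|       "math/rand"
|       "net/http"
|       "os"
|       "time"
|   )
|
|   const (
|       updateURL = "https://updates.example.com/check" // made up
|       binPath   = "/usr/local/bin/app"                // made up
|   )
|
|   func checkOnce(version string) error {
|       // The worker looks at the version (and source IP) and
|       // answers with a 304 or with the new binary.
|       resp, err := http.Get(fmt.Sprintf("%s?version=%s",
|           updateURL, version))
|       if err != nil {
|           return err
|       }
|       defer resp.Body.Close()
|       if resp.StatusCode == http.StatusNotModified {
|           return nil // already on the right version
|       }
|       // Stage next to the target so the rename is atomic.
|       f, err := os.CreateTemp("/usr/local/bin", ".app-new-*")
|       if err != nil {
|           return err
|       }
|       if _, err := io.Copy(f, resp.Body); err != nil {
|           return err
|       }
|       f.Close()
|       os.Chmod(f.Name(), 0o755)
|       if err := os.Rename(f.Name(), binPath); err != nil {
|           return err
|       }
|       os.Exit(0) // systemd Restart=always starts the new one
|       return nil
|   }
|
|   func main() {
|       for {
|           // Random jitter spreads the fleet's checks out so
|           // the links never see a thundering herd.
|           time.Sleep(time.Duration(rand.Intn(3600)) * time.Second)
|           if err := checkOnce("v1.2.3"); err != nil {
|               fmt.Fprintln(os.Stderr, "update check:", err)
|           }
|       }
|   }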
| capableweb wrote:
| > Thanks for the suggestion, but as weird as this sounds,
| we also don't have central servers to use at the data
| centers for this.
|
| Same here, hence the suggestion of using P2P software
| (BitTorrent) to let clients fetch the data from each other
| (together with an initial entrypoint for the deployment,
| obviously). You'll avoid the congestion issue, as clients
| will fetch the data from whatever node is nearest;
| configured properly, they will only fetch data from outside
| the internal network once, and after that it's all local
| data transfer (within the same data center).
| latchkey wrote:
| The machines are all on separate vlans. Only a few of
| them can talk to each other at a time. There isn't a huge
| amount of benefit there.
| westurner wrote:
| Notes re: _distributed_ temporal Data Locality, package
| mirroring, and CAS such as IPFS: "Draft PEP: PyPI cost
| solutions: CI, mirrors, containers, and caching to scale"
| (2020) https://groups.google.com/g/pypa-dev/c/Pdnoi8UeFZ8
| https://discuss.python.org/t/draft-pep-pypi-cost-
| solutions-c...
|
| apt-transport-ipfs: https://github.com/JaquerEspeis/apt-
| transport-ipfs
|
| Gx: https://github.com/whyrusleeping/gx
|
| IPFS:
| https://wiki.archlinux.org/title/InterPlanetary_File_System
| aspenmayer wrote:
| I'd seriously look into BitTorrent for this use case, as it
| sounds ideal. You can even configure your client to run a
| script after a torrent completes, so you could script the
| update migration with minimal code. You can also set up
| upload/download limits easily.
|
| I think Resilio Sync might also be a good option; I believe
| it even uses BitTorrent internally. (It was formerly known
| as BitTorrent Sync; not sure why they changed the name.)
| latchkey wrote:
| Over-engineering. You're trying to solve a problem that I
| don't have. What I have now works great with a page of
| well-tested code and doesn't have the complexity of BT.
|
| I'll repeat the comment above: the machines are all on
| separate vlans and don't really talk to many other
| machines.
| freedomben wrote:
| "relatively" is the operative word there. Compared to
| regular/full openshift, it _is_ tiny. I would imagine they
| chose the word "relative" because in absolute terms nobody
| would call 160mb tiny.
| hosteur wrote:
| Sounds like a use case for BitTorrent.
| latchkey wrote:
| Separate vlans
| hericium wrote:
| > I have many thousands of machines running in multiple
| datacenters, and even getting a ~4 MB binary distributed onto
| them without saturating the network (100 Mbit) and slowing
| everything else down is a bit of a challenge.
|
| You may find a murder[1] / Herd[2] / Horde[3]-type tool of
| some use.
|
| [1] https://github.com/lg/murder
|
| [2] https://github.com/russss/Herd
|
| [3] https://github.com/naterh/Horde
| eyegor wrote:
| If you're really that sensitive to size, you may want to try
| 7z. I can usually get archive sizes a few percent smaller
| than xz, with faster decompression to boot. Of course, you
| might then need to install a 7z lib, which could be an issue.
| speedgoose wrote:
| Using advanced devops tools for IoT is always interesting. I
| have seen a few demos that were variations of the following
| comic: https://www.commitstrip.com/en/2016/05/26/the-internet-of-th...
|
| One demo using Microsoft Azure IoT Hub, Docker, digital twins
| (a glorified JSON, from memory), and a Raspberry Pi was fun
| because it took minutes to deploy something that made a light
| blink.