[HN Gopher] Reclaiming the lost art of Linux server administration
___________________________________________________________________
 
Reclaiming the lost art of Linux server administration
 
Author : prea
Score  : 268 points
Date   : 2022-01-28 18:09 UTC (4 hours ago)
 
web link (www.pietrorea.com)
w3m dump (www.pietrorea.com)
 
| Eduard wrote:
| wrt
| https://gist.github.com/pietrorea/9081e2810c20337c6ea85350a3... :
| 
| Don't use "here documents" or "here strings" for passwords. Even
| in bash versions as recent as 2020, they create a temporary file
| within `/tmp/` with the secret inside. If the timing is unlucky,
| it will get written to disk and therefore leave permanent traces
| even after reboot. Only shredding will securely delete the data.
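| 
| A minimal sketch of the difference, assuming a MySQL-style setup
| script like the one in the gist (the statement and variable names
| here are placeholders):
| 
|   # Risky: bash may back a here-string/heredoc with a temp file under /tmp
|   mysql <<< "ALTER USER 'app'@'localhost' IDENTIFIED BY '$APP_PW';"
| 
|   # Safer: a pipe stays in kernel memory and never touches the filesystem
|   printf '%s\n' "ALTER USER 'app'@'localhost' IDENTIFIED BY '$APP_PW';" | mysql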
 
| variant wrote:
| Not quite the angle the author was getting at, but I've noticed
| at $dayjob that staff who can do incredibly complex automation
| against Linux-based stacks, containers, etc. get quite lost when
| something low level isn't working right. Gaps in their
| understanding of OS-level troubleshooting and concepts get them
| stuck.
| 
| You're wise to keep staff around who understand the low level
| stuff, in addition to the shiny new abstraction based tools.
 
| guilhas wrote:
| I do run my home server, but it is definitely an investment, and
| for some use cases it might not be worth it
| 
| The hardware is the cheapest part; then you have to pay for
| electricity, manage backups, fix RAID problems, and have a good
| internet connection. You have to pay attention to how the server
| is doing. And if you're serving a business, you have to be
| available to debug any issue, investing a lot of time you could
| otherwise spend actually working on the project.
| 
| But definitely most devs should have a small home server for
| trying unimportant things. Nothing complicated, just keep the
| standard hardware config. There are second-hand servers available
| for $50. Install some Linux and have it running 24/7. Quite fun
| experimenting and hosting simple things
 
| tester756 wrote:
| How about security?
| 
| Any good resources / practices on making your server safe? and
| maybe not those kernel level tricks
| 
| also automated deployment
| 
| so I can commit and it'll be deployed on the server
| 
| I thought about using GitHub Actions so that when I push, the
| server receives an HTTP ping, clones the repo and sets up the app
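| 
| For the "clones the repo and sets up the app" part, a minimal
| sketch of the kind of pull-and-restart script the ping handler
| could run on the server (the path, branch and unit name are
| placeholders):
| 
|   #!/usr/bin/env bash
|   set -euo pipefail
|   cd /srv/myapp                        # working copy on the server
|   git fetch origin && git reset --hard origin/main
|   ./build.sh                           # whatever builds/installs the app
|   sudo systemctl restart myapp.service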
 
| jimmaswell wrote:
| From my experience I would never recommend giving up control of
| your servers to some third party. Weeks wasted waiting for
| useless support teams to get back to you on something you could
| have fixed in 10 minutes if you had root. Opaque configuration
| issues you can't debug without said useless support team. Needing
| permission and approval for every little thing on prod. If I was
| ever at a high level in a company I'd never go farther than some
| AWS load balancing or whatever on cloud instances we still have
| root on.
 
  | mark242 wrote:
  | > If I was ever at a high level in a company I'd never go
  | farther than some AWS load balancing or whatever on cloud
  | instances we still have root on.
  | 
  | Your competitors would salivate at this statement, fyi. Speed
  | is a competitive advantage. AWS is not "let's rent a big ball
  | of EC2 servers and call it a day", and anyone who treats it
  | like that is going to get eaten alive. If you have not looked
  | at -- for example -- Dynamo, you should. If you have not looked
  | at SQS, you should. The ability to have predictable, scalable
| services for your engineers to use and deploy against is like
| dumping kerosene onto a fire: it unlocks abilities and velocity
| that more traditional software dev shops just can't compete
  | against.
 
    | Hackbraten wrote:
    | Our customer (300k+ employees) switched to AWS a couple of
    | years ago and I simply hate it so much. The "predictable,
    | scalable" service we're using, MSK, is a pain to develop
    | against. Log files are all over the place, thanks to
    | microservices and because no one, myself included, has a clue
    | on how to make them manageable again. AWS's graphical user
    | interface is a constantly changing mess. I hate clicking my
    | way through a GUI just so I can download individual log files
    | manually.
    | 
    | I wonder how you folks manage to work with AWS and not hate
    | it.
 
  | jjav wrote:
  | Yes, the absolute opaqueness of so-called serverless is a huge
  | hit to productivity.
  | 
  | Numerous times there's something weird going on and you're
  | stuck trying to guess and retry based on largely useless logs
  | until it somehow works better but you never really know what
  | the root cause truly was.
  | 
  | Meanwhile on my own server I'll ssh in and have complete
  | visibility, I can trace and dump network traffic, bpftrace
  | userspace and kernel code, attach debuggers, there's no limit
  | to visibility.
  | 
  | Yes lambda/serverless saves you a day or three in initial setup
  | but you'll pay that time back with 10-100x interest as soon as
  | you need to debug anything.
 
| joshstrange wrote:
| I don't know, abstraction is the name of the game and it makes my
| job 1000x easier. I have multiple servers running in my house
| that host everything from Plex to small little apps I've written,
| it all runs in containers and I couldn't be happier. Is being
| able to set up a Wordpress site with a script really something we
| should strive for?
| 
| I've always been a fan of "standing on the shoulders of giants"
| and it's served me very well to have this mindset. I'm fine to
| dive deep when I have to but diving deep just to dive deep....
| not so much.
| 
| Semi-recently I had need of a simple blog for a friends/family
| thing, I spun up a wordpress and mysql container and was done.
| Over a decade ago I used to set up and manage wordpress installs
| but it's not a skill I need.
| 
| I find this article a little odd since they talk about server
| admin but then also about scripting a setup script for your
| server, which is more in the "cattle" category for me and less in
| the "pet" category that I would consider "server administration".
 
| mattbillenstein wrote:
| Also, this is the real "multi-cloud" people tend to ignore. All
| the clouds can run all the popular Linux distros, so if your
| target is one of those, you can run your app anywhere without a
| lot of hassle.
 
| skilled wrote:
| I have been running all my sites on a VPS exclusively since about
| 2004. I might not be a server architect but I like the idea of
| managing the server myself.
 
| nickdothutton wrote:
| It is partly for this reason that I'm funding an invite-only
| community shell/pubnix system.
 
| invokestatic wrote:
| I used to reach for shell scripts to configure servers, then
| Puppet, then Salt, and then finally to Ansible. Configuring
| servers declaratively is such a massive improvement over shell
| scripts. The fact that Ansible is agentless is also very nice and
| works very well for when you only have a handful of servers.
| 
| Only thing I dislike is YAML, which I think is yucky!
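| 
| As an illustration of the agentless part, an ad-hoc run over plain
| SSH is all it takes (the hostname, user and playbook name here are
| placeholders):
| 
|   # ad-hoc run against a host with nothing on it but sshd and Python
|   ansible all -i 'web1.example.com,' -u admin -m ping
|   # same mechanism, but declarative: apply a playbook over SSH
|   ansible-playbook -i 'web1.example.com,' -u admin --become site.yml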
 
  | jhickok wrote:
  | Out of curiosity, have you tried tools like Pulumi? I've never
  | used it but as a longtime Ansible user it's something that has
  | my interest.
 
  | SkipperCat wrote:
  | We took the same path, using config management tools to
  | automate our deployments. But after a while, we realized that
  | the servers only existed to run apps, and those apps could be
  | declaratively described as containers and the whole thing
  | pushed to Kubernetes.
  | 
  | That was our 'perfect world'. Reality was different and we
  | still have a lot of servers running stuff, but what we did push
  | into K8s really reduced our operations workload and we're
  | pretty happy about that.
 
  | ff317 wrote:
  | It's a leaky abstraction, though. The problem is that many
  | systems people that were raised only on these abstractions lack
  | the depth to understand what's under the hood when those other
  | layers do unexpected things.
 
    | NikolaeVarius wrote:
    | I don't think there is much abstraction about an apt-get
    | command and waiting for exit 0
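    | 
    | (i.e. the whole "abstraction", in full -- nginx here is just a
    | placeholder package:)
    | 
    |   apt-get install -y nginx
    |   echo "exit status: $?"   # 0 means the package is in place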
 
| pier25 wrote:
| I know how to set up a basic VPS with firewalls, nginx, etc, but
| I'm scared to death to do that for a production environment
| available online.
 
| quickthrower2 wrote:
| > Compare this reality with cloud services. Building on top of
| them often feels like quicksand -- they morph under you, quickly
| deprecate earlier versions and sometimes shut down entirely.
| 
| This rings true to me. On Azure anyway. Like the rest of tech you
| gotta keep up on the hamster wheel! Example: they canned Azure
| Container Services because of k8s - just imagine if you tightly
| integrated with that and now you need to rewrite.
| 
| Also not mentioned in the article is cost. Hetzner is loved on
| HN for this reason.
| 
| That said, k8s is probably a stable and competitive enough
| platform that it makes a good tradeoff. By using it you invest in
| ops skills rather than specifically sysadmin skills, and I believe
| k8s skills will be long lasting and less faddish than proprietary
| vendor cloud skills.
 
| solatic wrote:
| Why can't people just pick the right tool for the job? The truth
| behind these managed services is that, for the correct usecases,
| they are VERY cheap. And for the wrong usecases, they are
| RIDICULOUSLY expensive.
| 
| Most businesses have nightly cronjobs generating some kind of
| report that is then emailed to stakeholders. Why on Earth would
| you run a dedicated Linux box for that anymore? Glue a nightly
| trigger to AWS Lambda, send the report via AWS SES, and it's
| free. Literally, because it fits quite easily within the free
| plan. No $5/month VPS box, no patching, no firewalling, no phone
| calls from execs at 6 AM wondering why their report isn't in
| their inbox and you track it down to the server being down when
| the cronjob was supposed to fire.
| 
| With that said, if you come to me and tell me that you want to
| add a new business feature to stream video for our customers off
| AWS, I'll first ask you why you didn't tell me you won the
| lottery, then I'll berate you for being stupid enough to spend
| your lottery winnings on the AWS bill.
| 
| Pick the right tool for the job.
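| 
| (A minimal sketch of the "glue" for that nightly report, using the
| AWS CLI; it assumes a Lambda function named nightly-report already
| exists and sends the mail via SES -- the account ID and region are
| placeholders:)
| 
|   # fire the report function every night at 06:00 UTC
|   aws events put-rule --name nightly-report \
|     --schedule-expression "cron(0 6 * * ? *)"
|   aws lambda add-permission --function-name nightly-report \
|     --statement-id nightly --action lambda:InvokeFunction \
|     --principal events.amazonaws.com \
|     --source-arn arn:aws:events:us-east-1:123456789012:rule/nightly-report
|   aws events put-targets --rule nightly-report \
|     --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:nightly-report'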
 
  | joshstrange wrote:
  | This is the real truth. People complain about certain services
  | and their costs (Lambda being one I've heard) but I have a
  | full side project that runs on lambda with extremely bursty
  | traffic and it couldn't be more perfect. If I had sustained
  | activity I might look into something else but it really does
  | come down to picking the right tool for the job.
 
| TacticalCoder wrote:
| A very weird thread that degenerated into: _"PaaS vs self-
| hosted/self-owned hardware"_.
| 
| I'm pretty sure most people sysadmin'ing their Linux servers are
| actually doing it with rented dedicated servers. TFA btw
| specifically mentions: _"don't manage physical hardware"_. Big
| companies like Hetzner and OVH have hundreds of thousands of
| servers and they're not the only players in that space.
| 
| They don't take care of "everything" but they take care of
| hardware failure, redundant power sources, Internet connectivity,
| etc.
| 
| Just to give an idea: 200 EUR / month gets you an EPYC 3rd gen
| (Milan) with shitloads of cores and shitloads of ECC RAM and a
| fat bandwidth.
| 
| And even then, it's not "dedicated server vs the cloud": you can
| very well have a dedicated server and slap a CDN like CloudFlare
| on your webapp. It's not as if CloudFlare was somehow
| only available to people using an "entire cloud stack" (whatever
| that means). It's the same for cloud storage / cloud backups etc.
| 
| I guess my point is: being a sysadmin for your own server(s)
| doesn't imply owning your own hardware and it doesn't imply
| either "using zero cloud services".
 
  | tormeh wrote:
  | > it doesn't imply either "using zero cloud services"
  | 
  | Enter big-three cloud egress pricing. Designed to make sure
  | that you have to go all-in.
 
  | LoveGracePeace wrote:
  | I generally agree, I have a cheap AWS Lightsail VPS (mainly for
  | email hosting since my ISP blocks port 25 because I'm a
  | "consumer" and they want to protect the world for consumer
  | spammers) but also for flexibility. I like that the Internet is
  | not at my doorstep (no open ports at home). So, cheap VPS,
  | Wireguard and my home machines to serve whatever I want. I
  | don't pay extra if I use a ton of CPU or disk IO, for example.
  | 
  | Here is my Wireguard server (cheap VPS) and client (my home
  | servers) config:
  | 
  | #
  | # Client (the actual self-host local server)
  | #
  | 
  |     [Interface]
  |     ## This Desktop/client's private key ##
  |     PrivateKey = redacted
  | 
  |     ## Client ip address ##
  |     Address = 10.10.123.2/24
  | 
  |     [Peer]
  |     ## Ubuntu 20.04 server public key ##
  |     PublicKey = redacted
  | 
  |     ## set ACL ##
  |     AllowedIPs = 0.0.0.0/0
  | 
  |     ## Your Ubuntu 20.04 LTS server's public IPv4/IPv6 address and port ##
  |     Endpoint = redacted:12345
  | 
  |     ## Keep connection alive ##
  |     PersistentKeepalive = 15
  | 
  | #
  | # Server (in the Wireguard context, exposed to the Internet)
  | #
  | 
  |     [Interface]
  |     ## My VPN server private IP address ##
  |     Address = 10.10.123.1/24
  | 
  |     ## My VPN server port ##
  |     ListenPort = 12345
  | 
  |     ## VPN server's private key i.e. /etc/wireguard/privatekey ##
  |     PrivateKey = redacted
  | 
  |     PostUp = iptables -i eth0 -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.10.123.2
  |     # Add lines for more ports if desired
  | 
  |     PostDown = iptables -i eth0 -t nat -D PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.10.123.2
  |     # Add lines for more ports if desired
  | 
  |     [Peer]
  |     ## Desktop/client VPN public key ##
  |     PublicKey = redacted
  | 
  |     ## client VPN IP address (note the /32 subnet) ##
  |     AllowedIPs = 10.10.123.2/32
 
| rob_c wrote:
| why is this not required at interview for sysadmins?
| 
| it would up the status of the industry overnight if everyone was
| at this level...
 
| 0xbadcafebee wrote:
| I still do not understand how anyone can become a software
| engineer and not stop to learn how an operating system or network
| works. I would have gone crazy if I'd never learned how network
| protocols work, or how web servers work, or virtual memory, or
| schedulers, security models, etc.
| 
| It's like manufacturing tires without knowing how an engine
| works. Don't you want to know how torque and horsepower affect
| acceleration and velocity? How else will you know what forces
| will be applied to the tires and thus how to design for said
| forces?
 
| thepra wrote:
| I would argue you don't even need to put that much effort into
| learning bash scripting; you can totally get away with knowing
| systemd, journalctl, nginx, apt, ssh and docker and how to run
| them through bash.
| 
| Everything else is per-software configuration files and running
| commands from the software's setup documentation.
| 
| Plus, I would run a server with a DE simply because I want to be
| able to look into databases with a GUI and edit config files
| with a nice text editor.
 
  | dane-pgp wrote:
  | > knowing systemd, journalctl, nginx, apt, ssh and docker and
  | how to run them through bash.
  | 
  | Or, the way things are going, systemd, systemd[1], systemd[2],
  | systemd[3], systemd[4] and systemd[5].
  | 
  | [1] https://www.freedesktop.org/software/systemd/man/journalctl....
  | [2] https://www.freedesktop.org/software/systemd/man/systemd-jou...
  | [3] https://www.freedesktop.org/software/systemd/man/systemd-mac...
  | [4] https://www.freedesktop.org/software/systemd/man/systemd-log...
  | [5] https://www.freedesktop.org/software/systemd/man/systemd-nsp...
 
| VWWHFSfQ wrote:
| Regular SA and DBA jobs will be almost completely gone within a
| decade or so. Same as there are hardly any auto mechanics anymore
| because nobody can fix any of the new cars but the manufacturer.
| 
| You'll only find those jobs at one of the handful of cloud
| companies. Nobody will know how to do anything for themselves
| anymore and all this experience and knowledge will be lost.
| 
| There are no more actual administrators. Just users paying rent.
 
  | tester756 wrote:
  | >Same as there are hardly any auto mechanics anymore because
  | nobody can fix any of the new cars but the manufacturer.
  | 
  | wait, what? definitely not in eastern eu
  | 
  | it seems like there's one mechanic every few kms
  | 
  | but maybe that's due to the fact that the average car is
  | relatively old
 
  | pjmlp wrote:
  | They are called DevOps now, and management expects them to do
  | everything that isn't pure development including classical IT,
  | and also jump in to fix development if the respective dev is on
  | leave.
  | 
  | Yes, I know that isn't what DevOps is supposed to be, but we
  | all know how Agile turned out, management has a magic touch to
  | distort such concepts.
 
  | [deleted]
 
  | trabant00 wrote:
  | I've been hearing this for the past 20 years. And now my
  | sysadmin skills are more and more in demand. For the past 5
  | years or so I started making more money than a dev because of
  | supply and demand.
  | 
  | Rent to AWS actually drives demand up quite a lot since the
  | bills are huge and very few people understand what is under the
  | hood and how it can be optimized.
  | 
  | I doubt very much things will change in the near future. In the
  | far one... who knows.
  | 
  | Edit: car mechanics with their own shop make significantly more
  | money than me and it seems to only get better for them as cars
  | become more complex.
 
    | dymax78 wrote:
    | > Rent to AWS actually drives demand up quite a lot since the
    | bills are huge and very few people understand what is under
    | the hood and how it can be optimized.
    | 
    | A few years ago I participated in a Splunk deployment and the
    | cloud solution utterly dwarfed an in-house enterprise
    | solution, in regards to cost. Even in the event that cost was
    | irrelevant, certain sectors (financial institution(s)) are
    | going to have a difficult time pivoting to a cloud-based
    | solution and relinquishing control over the underlying
    | infrastructure.
 
    | Nextgrid wrote:
    | Out of curiosity, how do you find old-school sysadmin gigs? I
    | find that everything nowadays requires knowing the specifics
    | of a particular cloud and their managed services as opposed
    | to raw Linux or networking knowledge.
 
    | ugjka wrote:
    | The low level stuff won't magically disappear, you still will
    | need someone who can debug the kernel or whatever is under
    | the hood when shit blows up in everyone's face
 
| candiddevmike wrote:
| Blame the folks demonizing/shaming having "pet" servers and
| pushing immutable infrastructure. Linux server administration is
| quite enjoyable, and with how well apps these days can scale
| vertically, it really takes a special kind of workload to need
| (and actually saturate) fleets of servers.
 
  | notyourday wrote:
  | > Blame the folks demonizing/shaming having "pet" servers and
  | pushing immutable infrastructure. Linux server administration
  | is quite enjoyable, and with how well apps these days can scale
  | vertically, it really takes a special kind of workload to need
  | (and actually saturate) fleets of servers.
  | 
  | You don't need pet servers. Puppet or Ansible make your
  | baremetal cattle.
 
    | candiddevmike wrote:
    | I think most folks would argue "cattle" means using imaging
    | to manage/replace your fleet. Using something like puppet or
    | ansible against a fresh install implies a level of
    | individualism towards each system, as they "may" have minute
    | differences based on when puppet/ansible ran, even if they're
    | part of a dynamic inventory of some sort.
 
      | notyourday wrote:
      | I'm not following, sorry.
      | 
      | This is cattle:
      | 
      | * PXE boot server(s)
      | 
      | * Image contains masterless puppet bootstrap.
      | 
      | * Server(s) asks git - "give me the bootstrap for my mac
      | address"
      | 
      | * Server(s) gets a list of classes to apply.
      | 
      | * Server(s) applies classes.
      | 
      | Done.
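      | 
      | (A rough sketch of what that first-boot bootstrap could look
      | like -- the repo URL, interface name and directory layout are
      | placeholders, not a real setup:)
      | 
      |   #!/usr/bin/env bash
      |   set -euo pipefail
      |   mac=$(cat /sys/class/net/eth0/address)
      |   git clone https://git.example.com/bootstrap.git /etc/bootstrap
      |   # the repo maps MAC addresses to a list of puppet classes
      |   cp "/etc/bootstrap/nodes/${mac}.pp" /etc/puppet/manifests/site.pp
      |   puppet apply --modulepath=/etc/bootstrap/modules \
      |     /etc/puppet/manifests/site.pp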
 
      | [deleted]
 
      | [deleted]
 
  | 0xCMP wrote:
  | In my experience Pet servers are a good starting point (you
  | really should _graduate_ from Pet servers into all the various
  | immutable/cattle stuff), but it can quickly require discipline
  | from the Admins.
  | 
  | They can't be doing one-off undocumented config, package, and
  | network/firewall changes which make it impossible to set up
  | another server reliably. At $company I moved us to
  | Terraform+Packer (to get them used to immutable deploys, but
  | still just an EC2 instance) then Pulumi+Docker+Fargate so we
  | could fix our deployment velocity. The CTO was constantly
  | afraid everything would break; mostly because it actually would
  | break all the time. Now basically anyone can deploy even if
  | they're not a SysAdmin.
  | 
  | That's not to say you can't automate a Pet Server, but it's a
  | lot more likely for someone to "just once" make some changes
  | and now you don't trust your automation. In our case we had
  | SaltStack and we were blocked by the CTO from running it unless
  | it was off-hours/weekend.
 
    | candiddevmike wrote:
    | Sounds like you need a new CTO.
 
      | 0xCMP wrote:
      | As it turns out, a lead developer can't unilaterally change
      | the CTO. Not sure how it works for you. I can control tech,
      | direction, etc. or move on to another job.
      | 
      | I chose to work with the CTO/team to figure out a solution
      | everyone could live with. I even chose a more annoying
      | solution (Packer) initially just to make sure people felt
      | comfortable and to avoid changing things any more than I had
      | to.
 
    | NikolaeVarius wrote:
    | I find people don't know how amazing an "immutable" server
    | fleet is until they've experienced it.
    | 
    | It was so trivial to terminate and restart dozens of servers
    | at any given time, since, unless there was a mistake in the
    | cloud-init, we could bootstrap our entire infrastructure from
    | scratch within an hour.
    | 
    | It was amazing: we never had to deal with something missing on
    | a server or a config being wrong in a special case. Dozens of
    | hosts just purring along with zero downtime, since the moment
    | anything became unhealthy a replacement host would boot
    | automatically and the old instance would be terminated.
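    | 
    | (For anyone who hasn't seen it: the cloud-init bit can be as
    | simple as a user-data shell script baked into the launch
    | template -- the package and image names here are placeholders:)
    | 
    |   #!/bin/bash
    |   # runs once, as root, on first boot of every instance
    |   set -euo pipefail
    |   apt-get update && apt-get install -y docker.io
    |   docker run -d --restart=always -p 80:8080 registry.example.com/myapp:latest
    |   # every instance that boots from this template is identical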
 
  | bblb wrote:
  | I work in IT Operations of a big IT house. 100% local gov
  | customers. We fully manage around 5000 pet servers. ~30
  | sysadmins and some of us do architecture designing also.
  | There's also a separate networking team of about 10 network
  | specialists.
  | 
  | Senior sysadmins are really hard to come by today, not to
  | mention someone who wants to do architecture also.
  | 
  | My hunch is that the 5000 onprem pet servers are not going away
  | any day soon, because a massive amount of it is legacy systems
  | that take a long time to migrate to cloud, if ever. Also the
  | work stress is just ridiculous. So much stuff to do, even with
  | automation. Only reason I still do this is that I like the "old
  | school" tech stack vs. cloud IaaS/PaaS alternatives.
 
    | readingnews wrote:
    | >>Senior sysadmins are really hard to come by today, not to
    | mention someone who wants to do architecture also.
    | 
    | I am not so sure... I am a well seasoned sysadmin, been doing
    | server, network, architecture. I consider myself a solid
    | linux/network expert and have managed datacenters. When I
    | look for a new/more exciting job, or for a pay raise, all I
    | see are "cloud, AWS, devops". I never see "old school"
    | sysadmin jobs, e.g. as you say, "we have a room full of linux
    | boxes and we manage them with ansible/scripts/etc, but we
    | design and maintain them ourselves, come join our team".
 
  | Karrot_Kream wrote:
  | I've never seen anyone demonize or shame having pet servers; if
  | anything, people on tech news sites write about their pet
  | servers constantly (understandably, it's fun!). Just as you're
  | not going to make all your furniture by hand when you start a
  | business, but will instead buy whatever works from Ikea, you
  | make a conscious decision to build or buy (as TFA touched on)
  | based on the constraints of the business. And
  | sometimes a business, for compliance reasons for example, may
  | choose to keep their servers in-house, in which case you could
  | potentially be a sysadmin.
 
| warent wrote:
| When my SaaS app started scaling, I saw how badly cloud can be
| priced if you have even slightly unusual use-cases. It occurred
| to me that instead of spending ~$600/mo on GCP, I can invest in a
| $3000 PowerEdge server with much better hardware, run it out of
| my home office, and it pays for itself in less than a year.
| 
| Running your own server is an investment that doesn't make sense
| for everyone. If you can get it, it is better than you might
| imagine. Being in full control--the master of your own destiny--
| is so liberating and empowering. It feels like the difference
| between constantly ordering Lyft/Uber/riding with friends, vs.
| owning your own car.
| 
| Not to mention, again, my hardware resources are so much better.
| This one server can run multiple profitable SaaS apps /
| businesses and still have room for experimental projects and
| market tests. Couldn't be happier with my decision to get off the
| cloud.
 
  | pmarreck wrote:
  | Does it have a backup schedule (and did you prove your restore
  | process works)? Is it replicated to another physically-offsite
  | location? Do you have to manage your own security keys? Load
  | balancing? Multi region availability? How do you admin it
  | remotely? Firewalled? Notifications of emergency situations
  | like low disk space, outages, over-utilization of bandwidth,
  | memory leakage, SMART warnings, etc.? What's your version
  | upgrade strategy? What's your OS upgrade strategy? Failover?
  | IPv6? VPN access? DMZ?
  | 
  | Basically, I think cloud handles a loooooot of details that
  | you now have to take on yourself if you self-host (at least if
  | you want to do it "legitimately and professionally" as a
  | reliable service). It's not clearly a win-win.
  | 
  | That all said, I recently canceled my cloud9 dev account at
  | amazon because the resources I needed were getting too
  | expensive, and am self-hosting my new dev env in a VM and
  | accessing it from anywhere via Tailscale, so that's been nice.
 
    | oceanplexian wrote:
    | > Does it have a backup schedule (and did you prove your
    | restore process works)? Is it replicated to another
    | physically-offsite location? Do you have to manage your own
    | security keys? Load balancing? Multi region availability? How
    | do you admin it remotely? Firewalled? Notifications of
    | emergency situations like low disk space, outages, over-
    | utilization of bandwidth, memory leakage, SMART warnings,
    | etc.? What's your version upgrade strategy? What's your OS
    | upgrade strategy? Failover? IPv6? VPN access? DMZ?
    | 
    | So yes, for those of us who have done Systems Administration
    | as a lifestyle/career, yeah you do all of those things and
    | it's part of the fun. I started doing OS upgrades,
    | monitoring, firewalls, and home backups of my own Linux
    | Servers some time in High School. Over-utilization of
    | bandwidth isn't really a "problem" unless you're doing
    | something weird like streaming video; a 1Gbps circuit can
    | support thousands upon thousands of requests per second.
 
  | aledalgrande wrote:
  | I'm just curious, because to me it seems a little bit
  | unrealistic.
  | 
  | How do you handle traffic spikes, especially from the
  | networking point of view? What kind of connection do you have?
  | How do you make your service fast for all customers around
  | the world (assuming you have a successful SaaS)? How do you
  | prevent a local blackout from taking down your service? Where
  | do you store your backups, in case your building gets flooded
  | or your machine blows up? What would you do in case a malicious
  | process takes over the machine? These are some things that are
  | managed in a cloud environment.
  | 
  | I understand investing in a datacenter rack where you own your
  | hardware, if you have the skills, but running it in a home
  | office cannot support a successful business nowadays IMO.
 
    | exdsq wrote:
    | In my first job as a system tech for an IT company in 2014 we
    | had a backup process run at 17:30 and whichever admin left
    | last would take the backup HDD home with them, lol. It worked!
    | There was also onsite redundancy with replicated windows
    | servers in an office across the street, which was enough.
    | Simpler times even just 8 years ago!
 
      | bamboozled wrote:
      | Which is ok if there isn't a local disaster which wipes out
      | your office and your friends?
 
    | warent wrote:
    | CenturyLink provides me gigabit internet on a business
    | connection. I get traffic spikes of ~100 rps and it's no
    | problem. Could probably easily handle another order of
    | magnitude or two. Local blackouts are mitigated with a UPS
    | https://en.wikipedia.org/wiki/Uninterruptible_power_supply
    | 
    | To be fair, I'm not 100% off the cloud. Backups are on an
    | hourly snapshot thru Restic https://restic.net/ and stored in
    | Google Cloud Storage off-prem in case of catastrophes. Also,
    | my Postgres database is hosted in Cloud SQL because frankly
    | I'm not feeling experienced enough to try hosting a database
    | myself right now.
    | 
    | It's really not as unrealistic as most people seem to think.
    | People have been building online businesses for years without
    | the cloud. Believing it's suddenly not possible is just their
    | marketing doing its job and winning them new customers, imo.
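    | 
    | (For reference, the hourly off-prem snapshot can be as simple
    | as a cron entry like this one -- the bucket, paths and
    | credential locations are placeholders, and it assumes a
    | one-time "restic init" was already done:)
    | 
    |   # /etc/cron.d/restic -- hourly snapshot to a GCS bucket
    |   GOOGLE_PROJECT_ID=my-project
    |   GOOGLE_APPLICATION_CREDENTIALS=/root/gcs-key.json
    |   0 * * * * root restic -r gs:my-backup-bucket:/ --password-file /root/.restic-pw backup /srv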
 
    | kxrm wrote:
    | I've been doing my own hosting for 20 years, this just
    | doesn't happen often enough to concern myself with it.
    | 
    | You need to disassociate yourself from the start-up mindset
    | when you DIY a side project app or site. Having said that,
    | there are ways to cache and improve your write performance
    | and maintain HA on a budget. The only thing that's hard to
    | replicate in self-hosting is a high performance global
    | presence.
 
    | TacticalCoder wrote:
    | > How do you handle traffic spikes, especially from the
    | networking point of view?
    | 
    | I don't know about GP but managing your own server doesn't
    | mean you cannot use a CDN with your webapp.
 
      | aledalgrande wrote:
      | A CDN wouldn't be enough if the traffic spike involves
      | writes.
 
        | giantrobot wrote:
        | Oh no a website went down! Oh, wait, that's not an
        | emergency. Where did the idea that every site and service
        | _needs_ five nines availability come from? A side project
        | goes down or is read only for a few hours. Who gives a
          | shit? It's not an insulin pump or nuclear control rods.
        | It's ok for people to be mildly inconvenienced for a
        | short period of time.
 
  | deepspace wrote:
  | > I can invest in a $3000 PowerEdge server with much better
  | hardware
  | 
  | And when some component of the server fails, your app is
  | unavailable until you can repair it. So you need another server
  | for redundancy. And a load balancer. And a UPS. And a second
  | internet connection.
  | 
  | If your app is at all critical, you need to replicate all of
  | this at a disaster recovery site. And buy/run/administer DR
  | software.
  | 
  | And hardware has a limited lifespan, so the $3000 was never a
  | one-time investment.
  | 
  | I think there is often still a case to be made for self-hosting
  | but the numbers are not as rosy as they seem at first glance.
 
    | kxrm wrote:
    | I am not the guy you replied to, but I also self host my web
    | apps. I think every project is different and not all projects
    | demand near 100% uptime. I certainly strive for HA for my
    | projects but at the appropriate budget and my users
    | understand.
    | 
    | If you are trying to go commercial you might have a different
    | attitude but for those of us who do this mostly for fun and
    | for some donations on the side, overcomplicating our setups
    | to ensure we add a tenth of a percent to our uptime stats just
    | isn't worth it.
 
      | warent wrote:
      | This is an important point. My customers don't love outages
      | (who does?) but I've had them and it doesn't really hurt
      | that badly. My products aren't that critical. They're
      | understanding as long as you communicate.
 
        | exdsq wrote:
        | Plus they still happen on AWS (or other critical bits
        | like GitHub) so you're not immune anyway
 
    | senko wrote:
    | > And when some component of the server fails, your app is
    | unavailable until you can repair it.
    | 
    | So you have some downtime. Big deal. If this happens once
    | every few years and you need a day to repair it, your uptime
    | is still better than AWS.
    | 
    | Not just everyone hosts a realtime API millions of users
    | depend on every second of the day.
 
    | StillBored wrote:
    | So, just buy another and leave it as a hot (or cold) standby
    | in a different data-center. Or use AWS as the DR site and spin
    | it up only if the local HW fails.
    | 
    | This sounds expensive if you're talking one server vs. a
    | year of AWS charges, but is a tiny bump if it turns out you
    | need to buy a dozen servers to replace a large AWS bill.
    | 
    | Plus, I think most people underestimate how reliable server
    | grade hardware is. Most of it gets retired because it's
    | functionally obsolete, not because a power supply/whatever
    | fails. Which brings up the point that the vast majority of
    | failures with server grade hardware are on replaceable
    | components like power supplies, disks, SFPs, etc. Three or
    | four years out, those parts are frequently available on the
    | secondary markets for pocket change.
 
      | bcrosby95 wrote:
      | Yeah. We run servers into the ground where I work. We have
      | around 20 of them. Average age is around 11 years old.
      | Oldest is around 18.
 
      | christophilus wrote:
      | > most people underestimate how reliable server grade
      | hardware is
      | 
      | This. And there are millions of dollars of cloud marketing
      | materials and programs that are at least partly to blame.
 
      | wwweston wrote:
      | > Or use AWS as the DR site an spin it up only if the local
      | HW fails.
      | 
      | Yep. This seems like the obvious setup to me:
      | 
      | 1) make the usual case as economical as possible (and
      | ownership and the associated control will probably help
      | here, unless you have to lease the expertise too)
      | 
      | 2) outsource the exceptional case (ownership is less likely
      | to matter here, and will matter for less time even if it
      | does)
 
    | dijit wrote:
    | You need to replicate in the cloud too. Most people tend not
    | to because they think the cloud is magic, but it's computers,
    | and computers can fail -- even if they're someone else's.
    | 
    | Also "if some component fails or the app is critical" has a
    | lot of nuance, I agree with your sentiment but you should
    | know:
    | 
    | 1) Component failures in hardware are much rarer than you
    | think
    | 
    | 2) Component failures in hardware can be mitigated (dead RAM,
    | dead PSU, dead hard disk, even dead CPUs in some cases: all
    | mitigated). The only true failure of a machine is an
    | unmitigated failure due to not configuring memory mirroring
    | or something, or a motherboard failure (which is extremely
    | uncommon)
    | 
    | 3) The next step after "single server" isn't "build a
    | datacenter", it's buying a couple more servers and renting
    | half a rack from your local datacenter, they'll have
    | redundant power, redundant cooling and redundant networking.
    | They'll even help you get set up if it's 2-3 machines with
    | their own hardware techs.
    | 
    | I do this last one at a larger scale in Bahnhof.
    | 
    | also, $3000 will get you about 3-5 years out of hardware, at
    | which point, yeah, you should think about upgrading, if for
    | no other reason than it's going to be slower.
 
      | Lamad123 wrote:
      | I don't know what they call that logical fallacy cloud
      | fanatics use when they say "if blah blah, just go build
      | your own datacenter".
 
    | theodric wrote:
    | There's a big difference in service criticality between your
    | business website and your NAS full of pirated tentacle
    | hentai. Cases like the latter can accept extended outages,
    | and are very cost-effectively served by home-based infra.
 
  | 0xbadcafebee wrote:
  | Personally I wouldn't want to become an auto mechanic just to
  | drive my own car for my business, but you do you. (Makes sense
  | if you have a fleet of vehicles, but for one car?)
 
  | [deleted]
 
  | baybal2 wrote:
  | > it pays for itself in less than a year.
  | 
  | https://news.ycombinator.com/item?id=13198157
  | 
  | On one meeting we had a typical discussion with ops guys:
  | 
  | - "why wouldn't we optimise our hardware utilisation by doing
  | things a, b, and c."
  | 
  | - "hardware is crap cheap these days. If you need more
  | capacity, just throw more servers at that"
  | 
  | - "is $24k a month in new servers crap cheap by your measure?"
  | 
  | - "comparatively to the amount of how much money these servers
  | will make the same month, it is crap cheap. It is just a little
  | less than an annual cost of mid-tier software dev in Russian
  | office. We account only 12% increase in our revenue due to
  | algorithmic improvements and almost 80 to more traffic we
  | handle. A new server pays back the same month, and you and
  | other devs pay off only in 2 years"
 
    | jiggawatts wrote:
    | I've found this to be an unsuccessful approach in practice.
    | 
    | Performance is a complex, many-faceted thing. It has hidden
    | costs that are hard to quantify.
    | 
    | Customers leave in disgust because the site is slow.
    | 
    | No amount of "throwing more cores at it" will help if there's
    | a single threaded bottleneck somewhere.
    | 
    | Superlinear algorithms will get progressively worse, easily
    | outpacing processor speed improvements. Notably this is a
    | recent thing -- single threaded throughout was improving
    | exponentially for _decades_ so many admins internalised the
    | concept that simply moving an app with a "merely quadratic"
    | scaling problem to new hardware will _always_ fix the
    | problem. Now... this does nothing.
    | 
    | I've turned up at many sites as a consultant at eyewatering
    | daily rates to fix slow apps. Invariably they were missing
    | trivial things like database indexes or caching. Not Redis or
    | anything fancy like that! Just cache control headers on
    | static content.
    | 
    | Invariably, doing the right thing from the beginning would
    | have been cheaper.
    | 
    | Listen to Casey explain it: https://youtu.be/pgoetgxecw8
    | 
    | You need to have efficiency in your heart and soul or you
    | can't honestly call yourself an engineer.
    | 
    | Learn your craft properly so you can do more with less --
    | including less developer time!
 
  | rank0 wrote:
  | Do you have a static IP?
  | 
  | I have a homelab too but getting "enterprise grade" service
  | from comcast seems to be my biggest barrier to scaling without
  | leaning on aws.
 
    | Godel_unicode wrote:
    | Rent a $5 VPS, run a VPN tunnel from your lab to that box,
    | and run a reverse proxy on it. You'll get some additional
    | latency, but that's about it.
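    | 
    | (A minimal sketch of the VPS side of that setup, assuming a
    | WireGuard tunnel where the home lab is reachable at 10.0.0.2
    | and a Debian-style nginx layout -- addresses and the domain are
    | placeholders:)
    | 
    |   # on the $5 VPS: terminate the tunnel and reverse proxy port 80
    |   apt install -y wireguard nginx
    |   wg-quick up wg0                   # wg0.conf gives the VPS 10.0.0.1/24
    | 
    |   cat > /etc/nginx/sites-enabled/homelab.conf <<'EOF'
    |   server {
    |       listen 80;
    |       server_name example.com;          # placeholder domain
    |       location / {
    |           proxy_pass http://10.0.0.2;   # home lab over the tunnel
    |           proxy_set_header Host $host;
    |       }
    |   }
    |   EOF
    |   nginx -s reload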
 
      | rhn_mk1 wrote:
      | Caution: you may end up with your packets blackholing on
      | the way for unknown reasons after temporary loss of
      | connectivity.
      | 
      | I think it might have something to do with the NAT
      | forgetting about my UDP "connection", but haven't found the
      | culprit yet.
 
    | qmarchi wrote:
    | Hey, you actually have a few options, notably, doing nothing!
    | 
    | Comcast doesn't actually change your public IP address
    | between DHCP renewals and thus it's effectively static. The
    | only time that it'll change is when the modem is powered off
    | for an amount of time, or the upstream DOCSIS concentrator is
    | powered off for maintenance or otherwise.
 
      | orf wrote:
      | So: arbitrarily and without warning.
 
      | mwcampbell wrote:
      | I would be more worried about violating the ISP's terms of
      | service. Running a business based on that seems pretty
      | precarious.
 
    | warent wrote:
    | I have a static IP address / gigabit thru centurylink
 
    | acoard wrote:
    | Dynamic DNS (dDNS) works here[0]. You have free services like
    | no-ip, and also most paid domain registrars support this. I
    | know both Namecheap and AWS Route 53 support it if you want
    | it at your own domain. Essentially, it's a cron curling with
    | an API key from the host machine, that's it. Works great in
    | my experience.
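    | 
    | (The cron half of that is tiny -- the update URL and token here
    | are placeholders; each provider documents its own endpoint:)
    | 
    |   # /etc/cron.d/ddns -- re-announce the current IP every 5 minutes
    |   */5 * * * * root curl -fsS "https://ddns.example.com/update?host=home.example.com&token=REDACTED"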
 
      | sirl1on wrote:
      | Keep in mind you will have a small downtime until the new
      | IP is registered. Also the cache TTL of your domain will be
      | very low, so your site will have a small loading time
      | penalty from time to time.
 
    | dfinninger wrote:
    | For a while with my home lab I cronned a python script to
    | look up my external IP and update my domain in Route53 (and
    | Dyn before that). Worked out pretty well, I think it only
    | updated once or twice in two years.
 
    | kxrm wrote:
    | I wrote a script that basically hits an offsite DNS with my
    | latest IP. It's worked quite well. I think in the past 4
    | years I have had Comcast, they haven't changed my IP once.
 
| culopatin wrote:
| Since the post pretty much says "go out and do it!"
| 
| Does anyone have a good source of learning that is comprehensive
| and practical? I'm talking about a good guided book/tutorial on
| how to administer a server properly and what things one should
| know how to fix, not just how to set up Wordpress.
 
  | jefurii wrote:
  | When I learned this stuff I started with The Debian
  | Administrator's Handbook (https://debian-handbook.info) and an
  | O'Reilly book called Unix Power Tools. Since then I've read
  | handbooks for whatever servers/frameworks/apps I needed to use.
  | There was never one single source. I've also spent lots of time
  | googling error messages and reading Stack Overflow/Server Fault
  | and other sites.
 
  | cpach wrote:
  | This is a good start: https://www.ansiblefordevops.com/
 
  | b5n wrote:
  | I usually provide this as an intro:
  | 
  | http://www.linuxcommand.org/tlcl.php/
  | 
  | From there picking up configuration management should be pretty
  | straightforward.
 
| Gehoti wrote:
| I'm running rke2 (Rancher's k8s solution) on my server.
| 
| This means I can run my own servers and the only thing they do is
| run rke2.
| 
| I can take out a node and upgrade the base OS without issues or
| anything.
| 
| And I still get all the benefits of a high quality cluster (k8s).
| 
| I love it.
| 
| And yes, in my opinion it's easier and more streamlined to
| install storage software (OpenEBS) on my rke2 cluster and back up
| those persistent volumes than to do backups of my hard drives.
| 
| And my expectation is that while it already works very, very
| well, it will only get more stable and easier.
 
| madjam002 wrote:
| Honestly after discovering NixOS I have a new found joy of
| administering Linux servers. It's easy and painless, everything
| is declarative and versioned, and new machines can be set up for
| new projects or scaling in a matter of minutes.
| 
| This "cattle not pets" mentality doesn't make sense for
| everything and is highly inefficient if the OS itself seamlessly
| supports immutable workloads and configuration.
 
| SkipperCat wrote:
| Time is money, and the more time I spend on infrastructure, the
| less time I spend on product. And thus is born the incredible
| demand for infrastructure as a service.
| 
| Thankfully one person's cloud is another person's on prem
| infrastructure so sysadmin skills will always be in demand.
| 
| From my perspective in enterprise computing, I now see people
| taking 2 paths. One where they become super deep sysadmins and
| work on infra teams supporting large scale deployments (cloud or
| not) and the other being folks who write code and think of infra
| as an abstraction upon which they can request services for their
| code.
| 
| Both are noble paths and I just hope folks find the path which
| brings them the most joy.
 
  | unknown2374 wrote:
  | I find it very similar to how an understanding of OS and
  | hardware fundamentals can make one a much better software
  | engineer: knowing how infrastructure in the cloud/on servers is
  | set up helps you make better design decisions.
  | 
  | At least in my experience, my hobby of maintaining my own home
  | server helped out immensely in my path in the industry due to
  | knowing what tools are available when working on multi-faceted
  | software designs.
 
    | aledalgrande wrote:
    | It does, but you don't wanna have to deal with it constantly,
    | if you want to be working on a lot of feature work as an
    | application developer.
 
      | unknown2374 wrote:
      | Definitely agree with you on that. Making use of layers of
      | abstraction and delegation is absolutely necessary when
      | working on more and more impactful work.
 
  | akira2501 wrote:
  | That's when it clicked for me.. comparing my hourly salary rate
  | vs. the cost of running these services "in the cloud." Entirely
  | eliminating "system administration" from my duties was
  | absolutely a net win for me and our team.
 
    | [deleted]
 
    | marcosdumay wrote:
    | > Entirely eliminating "system administration" from my
    | duties...
    | 
    | ... and adding "cloud administration".
    | 
    | What is it with people doing completely one-sided analysis
    | even when they experiment the thing by themselves? Is cloud
    | administration less time consuming than system
    | administration? That's not my experience, so I'm quite
    | interested on how it got so.
 
      | mark242 wrote:
      | > Is cloud administration less time consuming than system
      | administration?
      | 
      | Infinitely, and if you look at it from a startup lens it
      | only makes sense. One needs to point only at the recent
      | log4j incident. This is obviously a gigantic black swan
      | event, but even just ongoing security patching at the OS
      | level can be a full-time gig. There is absolutely no
      | substitution for being able to ship code to a platform that
      | just runs it and scales it for you.
      | 
      | Andy Jassy had a great slide a few years back at re:Invent,
      | when talking about Lambda -- "in the future, 100% of the
      | code that you write will be business logic". If you really
      | think about that: how many times have you had to write some
      | kind of database sharding logic, or cache invalidation, or
      | maintain encrypted environment variables, whatever. Once you
      | grasp the idea that you can toss all of that -- that teams no
      | longer have to spend massive timesinks, budgets and hiring on
      | what are, effectively, solved problems -- you really start to
      | understand how you can move faster.
 
        | tormeh wrote:
        | > ongoing security patching at the OS level can be a
        | full-time gig
        | 
        | That's just a cronjob. I know some people don't like
        | doing it that way, but that's on them. I've seen this
        | work for years in production with minimal trouble.
 
        | christophilus wrote:
        | It's not even a cron job. It's a single apt install.
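        | 
        | (That would be unattended-upgrades on Debian/Ubuntu --
        | roughly, and assuming the default config is acceptable:)
        | 
        |   apt-get install -y unattended-upgrades
        |   dpkg-reconfigure -plow unattended-upgrades   # enables the daily security-update run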
 
        | demosito666 wrote:
        | > in the future, 100% of the code that you write will be
        | business logic
        | 
        | The present reality of Lambda is quite different though.
        | Even though the code of the function itself is more or
        | less "business logic" (although this is a really
        | meaningless term when we're talking about known
        | programming languages and computers), the scaffolding
        | around it with Terraform/CloudFormation/Serverless/etc.
        | is substantial, riddled with quirks and is really time-
        | consuming to figure out and update. I don't think I spend
        | less time on this accidental complexity now when we have
        | most of our logic in Lambda, compared to the times when
        | we were just running Flask apps in a VM.
        | 
        | This is not to mention how hard one has to fight to
        | overcome the limitations of the runtime, e.g. adding
        | "warmer" scripts to reduce cold-start latency (no,
        | provisioned concurrency doesn't help and is ridiculously
        | expensive). And then comes the bill after you
        | accidentally create an invocation loop between two
        | functions.
 
        | mark242 wrote:
        | Of course -- you're still writing code to delete a key in
        | Elasticache within your lambda. You're writing yaml code
        | for deployments. Hence the "in the future" portion of
        | this slide.
        | 
        | The scale-to-X and scale-to-zero features of Lambda,
        | along with the guaranteed interface to your lambda with
        | predictable input and output requirements, are incredibly
        | empowering for an engineering team. I can absolutely
        | guarantee that we have spent far, far, far less time
        | maintaining our infrastructure than what we would need to
        | be doing if we had a big-buncha-EC2 setup.
        | 
        | Imagine that the environment issues get taken care of,
        | because Amazon has teams and teams and teams of engineers
        | who are working on just that. Cloudflare has the zero-
        | cold-start isolates. All these platforms are heavily
        | invested in making your development and deployment
        | experience as easy as it can be. Concentrate on writing
        | your code, and you'll reap the benefits.
 
      | UI_at_80x24 wrote:
      | Too many of these people can't/won't/don't see past
      | C-Panel.
 
      | jhickok wrote:
      | Can be. Paying for SaaS offerings like GSuite or O365
      | is a great deal for, say, 100 seats, instead of paying
      | someone to administer on-prem email. "Cloud administration"
      | can be more work, less work, or about the same work as
      | classic systems administration. That's why carefully
      | running the numbers first is essential.
 
        | marcosdumay wrote:
        | Oh, sure. Offshoring email is much easier than running it
        | yourself.
        | 
        | The same isn't true for less standard kinds of service.
        | The more standardized something is the easiest it is to
        | decide what to hire, troubleshoot, and learn to configure
        | your options. The less standardized it is, the harder all
        | of those things become. VMs are very standard, email
        | servers are less so, but not by a huge margin. Web
        | accessible disk space and on-demand interpreters are
        | completely non-standardized and a hell to do anything
        | with.
        | 
        | Also, some services do need more upkeep than others.
        | Email is one extreme that requires constant care, file
        | storage and web servers demand much less attention.
 
    | 28304283409234 wrote:
    | Counterintuitively, engineers that run their own servers and
    | infra tend to gain a deeper understanding of what it takes to
    | provide an actual running service to end users. And therefore
    | they write better software, or at least there is better
    | teamwork with "devops" infra folks.
    | 
    | This is of course the highly subjective opinion of a
    | greybeard unixadmin.
 
      | dj_mc_merlin wrote:
      | I'd say that running infrastructure in the cloud still
      | requires the same deep understanding of what's going on
      | under the hood as running your on-prem infra. A lot of
      | annoying things are taken out: some stuff patches
      | automatically, some other things have an easier updating
      | procedure (and obviously the "physical" aspect is taken
      | care of).. but you still only get the basic elements for a
      | proper infrastructure. Your servers need an environment set
      | up, you need to build a network, add load balancing and
      | replication, monitoring etc. etc..
      | 
      | You can provision some of these things from cloud
      | providers, but your infra is going to go to shit unless you
      | actually understand what they're really providing you and
      | how to use it. If the only thing you can do is upload a
      | docker image to a cloud provider and click the "create
      | server" button, then that's not really infra work at all.
      | It's like Wix for sysadmins.
 
      | oceanplexian wrote:
      | It's also a competitive advantage. Look at Backblaze, their
      | business model simply wouldn't be possible on a cloud
      | provider.
 
| lrvick wrote:
| I have over 20 years of Linux/FreeBSD sysadmin experience ranging
| from universities to major silicon valley companies in both cloud
| and on-prem.
| 
| When it comes to companies I mostly support cloud these days but
| when it comes to me and my family I accept every downside and
| host almost all of our digital lives in a 42u rack in a gutted
| closet in our house with static IPs and business fiber.
| 
| I know where our data lives and no one can access it without a
| warrant and my explicit knowledge. I also save myself several
| hundred a month in third party cloud provider fees to host the
| same services and can reboot upgrade or repair anything whenever
| I want, but in general no more maintenance than cloud servers . I
| also never end up with exciting bills when experiments are
| forgotten about.
| 
| You pretty much get all the pros and cons of home ownership. For
| me it is mostly pros. Also keeps me dogfooding all the same
| practices I recommend to my clients.
 
  | c2h5oh wrote:
  | I remember how surprised people were when I demoed a $200/month
  | bare metal server outperforming, by a huge margin, an RDS MySQL
  | instance that they were paying upwards of $16k/month for.
  | 
  | IIRC we ended up using it as a disposable replica for some non-
  | real time but heavy operations.
 
    | akireu wrote:
    | It's 2022 and we're about to rediscover something we've known
    | for 40 years already: mainframes are freaking expensive.
 
    | viraptor wrote:
    | Here's what the bare metal server didn't come with:
    | 
    | API access for managing configuration, version
    | updates/rollbacks, and ACL.
    | 
    | A solution for unlimited scheduled snapshots without
    | affecting performance.
    | 
    | Close to immediate replacement of identical setup within
    | seconds of failure.
    | 
    | API-managed VPC/VPN built in.
    | 
    | No underlying OS management.
    | 
    | (Probably forgot a few...) I get that going bare metal is a
    | good solution for some, but comparing costs this way without
    | a lot of caveats is meaningless.
 
      | senko wrote:
      | > Here's what the bare metal server didn't come with:
      | 
      | [bunch of stuff I don't need]
      | 
      | Exactly. Imagine paying for all that when all you need is
      | bare metal.
      | 
      | Now imagine paying for all that just because you've read on
      | the Internet that it's best practice and that's what the
      | big guys do.
      | 
      | Way back the best practice was what Microsoft, Oracle or
      | Cisco wanted you to buy. Now it's what Amazon wants you to
      | buy.
      | 
      | Buy what you need.
 
        | ozim wrote:
        | I am buying IaaS. It is so nice to use a VPS with snapshots
        | every 4 hours that I don't have to think about.
        | 
        | I don't care where those snapshots are stored or how much
        | space they take. In case I need to restore, my IaaS provider
        | gives me a 2-click option - one click to restore and a 2nd
        | to confirm. I sit and watch the progress. I also don't care
        | about hardware replacement and anything connected to that.
        | I have to do VPS OS updates, but that is it.
        | 
        | I do my own data backups on a different VPS of course, just
        | in case my provider has some issue, but from a convenience
        | perspective that IaaS solution delivers more than I would
        | ask for.
 
        | DarylZero wrote:
        | Snapshots every 4 hours? That doesn't sound impressive at
        | all. In 2022 that's laptop tier capability.
 
        | rovr138 wrote:
        | > Buy what you need
 
      | [deleted]
 
      | c2h5oh wrote:
      | Of course there are a lot of benefits of using hosted
      | databases. I like hosted databases and use them for both
      | work and personal projects.
      | 
      | What I have a problem with is:
      | 
      | - the premium over bare metal is just silly
      | 
      | - maximum vertical scaling being a rather small fraction of
      | what you could get with bare metal
      | 
      | - when you pay for a hot standby you can't use it as a read
      | only replica (true for AWS and GCP, idk about Azure and
      | others)
 
        | viraptor wrote:
        | > when you pay for a hot standby you can't use it as a
        | read only replica (true for AWS
        | 
        | I'm not sure what you mean here. At least for MySQL you
        | can have an instance configured as replica + read-only
        | and used for reads. Aurora makes that automatic /
        | transparent too with a separate read endpoint.
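        | 
        | For instance (a rough sketch - the cluster name below is
        | made up), an app can point writes at the cluster endpoint
        | and reads at Aurora's "-ro" reader endpoint:
        | 
        |     # writes go to the writer (cluster) endpoint
        |     mysql -u app -p \
        |       -h myapp.cluster-abc123.us-east-1.rds.amazonaws.com appdb
        | 
        |     # reads can go to the reader endpoint, which spreads
        |     # them across the read replicas
        |     mysql -u app -p \
        |       -h myapp.cluster-ro-abc123.us-east-1.rds.amazonaws.com appdb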
 
        | milesvp wrote:
        | A hot standby is not a read replica. It's a set of servers
        | running in a different Availability Zone, mirroring current
        | prod, that is configured to be automatically failed over to
        | if the primary goes offline. It's been a few years since I
        | personally set this up in AWS, but at the time those servers
        | were completely unavailable to me and basically doubled the
        | cost of my production servers.
        | 
        | The fact that a hot standby is usually in some sort of
        | read-replica state prior to failing over is a technical
        | detail that AWS sort of tries to abstract away I think.
 
      | lnxg33k1 wrote:
      | Fully agree with this. In my free time I would fully go for
      | the bare metal, but if I have to save a company money by
      | taking on all the headaches that AWS solves, then I just say
      | goodbye.
 
      | Nextgrid wrote:
      | A lot of these aren't as important for something that's
      | fairly static and can't even be autoscaled or resized live
      | (excluding Aurora - I'm talking about standard RDS).
      | 
      | No access to the underlying OS can actually be a problem. I
      | had a situation where following a DB running out of disk
      | space it ended up stuck in "modifying" for 12 hours,
      | presumably until an AWS operator manually fixed it. Being
      | able to SSH in to fix it ourselves would've been much
      | quicker.
 
        | [deleted]
 
    | lnxg33k1 wrote:
    | Just out of curiosity, what did you get out of it? I mean, if
    | they get to pick it and put the responsibility AWS would carry
    | onto you, do you also get paid some of that money after saving
    | it for them? The way I see it, that $16k/month doesn't only pay
    | for the hardware, but also to keep headaches away and to have
    | someone to blame.
 
      | c2h5oh wrote:
      | IIRC I got to move some heavy analytics queries (rollups),
      | which could not be run on a read-only replica, and do it
      | without hearing the word budget once. The main db remained
      | on RDS.
 
    | rad_gruchalski wrote:
    | I picked up a second-hand Dell R720 last year. Best purchase
    | in years. Paid EUR1500 for 48 cores and 192GB of RAM.
 
    | papito wrote:
    | LOL. Priceless. Having these skills is very valuable. Us old
    | farts used to do a lot with what today would be called
    | "bootstrapped". Scarcity is no longer a "problem", except
    | that it is. Scarcity keeps you sharp, efficient, on the edge
    | - where you need to be. It's also - cheaper.
 
      | c2h5oh wrote:
      | Who would have guessed having local, low latency, high IOPS
      | drives would be better than a VM using an iSCSI-attached
      | drive for storage, right? ;-)
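      | 
      | If anyone wants to see the gap for themselves, a quick fio
      | run on both boxes makes it obvious (job name and sizes below
      | are arbitrary), then compare the reported IOPS and latency:
      | 
      |     # 4k random reads, the pattern that hurts network-attached
      |     # storage the most
      |     fio --name=randread --rw=randread --bs=4k --size=1G \
      |         --iodepth=32 --ioengine=libaio --direct=1 --runtime=60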
 
    | belter wrote:
    | Not a fair comparison and you know it. Now add to the
    | $200/month bare metal server the yearly salary of the 3 admins
    | you need to manage it: one as backup, one for daytime, one for
    | the night, plus a weekend rotation. Add to the admin salaries
    | social security, insurance and a margin of safety in case one
    | is unavailable due to sickness.
 
      | [deleted]
 
      | bcrosby95 wrote:
      | > a $200/month bare metal server
      | 
      | For 1 bare metal server? You do know these things run on
      | electricity, right?
 
        | glutamate wrote:
        | https://www.hetzner.com/dedicated-rootserver/matrix-ax
 
      | tester756 wrote:
      | Why would you need 3 admins watching it 24/7?
      | 
      | ____________
      | 
      | That's how I've seen it working in datacenter:
      | 
      | the cheapest junior admins ($450/month) covered the night
      | shifts,
      | 
      | and if something broke, they called an engineer
 
      | aparks517 wrote:
      | Around here, you'd spread the work of one new server to
      | folks you've already got or spread the cost of new hires to
      | a large number of servers.
      | 
      | I think it's also worth considering that many outfits
      | wouldn't get good value from the 24/365 coverage you
      | propose and don't care to pay for it.
 
        | wang_li wrote:
        | Or you'd hire an MSP. Having a full time employee is a
        | greater level of service than anyone here is getting from
        | a cloud provider. Lots of folks can get by with an admin
        | who stops in once or twice a week, or remotes in and
        | takes care of the daily tasks on largely a part-time
        | basis.
 
      | [deleted]
 
  | sjtindell wrote:
  | Just curious, what data do you worry about a warrant for?
  | Asking to protect my own data.
 
  | dspillett wrote:
  | I assume/hope you have good, tested, off-site backups of the
  | important data & config...
  | 
  | I run my stuff from home too, though it is smaller scale than
  | yours currently. Off-site & soft-offline backups are on
  | encrypted volumes on servers & VMs elsewhere.
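  | 
  | (For anyone wanting a starting point: for a small setup, even a
  | nightly rsync over SSH to a mount that sits on an encrypted
  | volume goes a long way - the host and paths below are just
  | placeholders.)
  | 
  |     # mirror /srv/data to the off-site box; --delete propagates
  |     # removals so the copy matches the source
  |     rsync -az --delete -e ssh /srv/data/ \
  |         backup@offsite.example.net:/mnt/crypt/host1/data/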
 
  | hnthrowaway0315 wrote:
  | This sounds very interesting. Do you have a blog describing your
  | experience? Just curious. Thanks for sharing~~
 
  | massysett wrote:
  | What services would you be paying "several hundred a month" for
  | - in what currency? In US dollars my personal cloud services
  | don't run more than $20 a month.
 
  | DizzyDoo wrote:
  | Woah, 42u, that's quite something! How many of those slots are
  | you using? What kind of cooling do you have to have for that
  | closet, and how is the volume for the rest of the house?
 
    | jlcoff wrote:
    | FWIW, I recently purchased maybe 21U worth of servers for less
    | than $2000. Mostly IBM 2U servers (all dual 6-core / 12-thread
    | Xeons, most for spare parts), NAS (enough for triple >12TB
    | redundancy), and LTO-4 loaders and drives, to go in a rack I
    | picked up for free locally which also came with Cisco switches.
    | 
    | I'm gonna run my private cloud merging 3 different un-backed-up
    | physical computers, and migrate services off Google.
    | 
    | That's my second free 42U rack, but the other was mostly used
    | as shelf space. I've also got a third rack rusting in my
    | backyard which I bought for $200, originally intended to run my
    | former employer's test infra, that I brought back home after
    | they laid us off.
 
    | thefunnyman wrote:
    | Not OP but you find a lot of these kinds of setups on Reddit.
    | Check out /r/homelab if you're interested. Even crazier is
    | /r/homedatacenter where you'll find dudes with full tape
    | libraries and everything. Super interesting to browse
    | through.
 
  | reacharavindh wrote:
  | You can also pick and choose how far you go with self
  | management. To me, as much fun as it is, I can't be bothered to
  | run the physical server in my home (not because I can't, but
  | because I won't). That's why my solution is to lease a bare
  | metal server on Hetzner, pay EUR 38/month and host all my
  | personal projects on it. This way I get a reliable network and
  | operation for my server and I still hold reasonable control
  | over my data and what I run.
 
  | sandworm101 wrote:
  | I'm in a similar boat. I had a server in a closet but switched
  | to a NAS a few years ago and haven't looked back. I do run into
  | people who think NAS is old hat and that everything should be
  | backed up to a cloud somewhere. These are the people who think
  | they are watching 4k because they select that option on youtube.
  | I want to watch 4k without buffering, without artifacts. NAS
  | lets me do that. And when I go away for a few days or weeks,
  | everything is turned _off_. No data leaks possible when the
  | "spinning rust" isn't spinning.
 
    | jlcoff wrote:
    | I also prefer 480p with a good screenplay to a 4k-60fps turd
    | from the past 15 years. I still manage to keep a hundred movies
    | or so under a TB.
 
  | Karrot_Kream wrote:
  | I'm curious how much you're paying for business fiber. Do you
  | have IP transit in your home, or is it just a business
  | connection?
 
    | Godel_unicode wrote:
    | 100mb FiOS Business is on the order of $70.
 
    | warent wrote:
    | Centurylink gigabit business fiber is about $140/mo, and a
    | block of 8 static IPv4s is $25/mo
 
  | bamboozled wrote:
  | > I know where our data lives and no one can access it without
  | a warrant and my explicit knowledge
  | 
  | As far as you know?
  | 
  | Your data is exposed to The Internet so someone could be
  | accessing it.
 
  | Yuioup wrote:
  | Yes, but you have talent and a lifetime of experience, plus
  | space for a noisy 42u rack full of servers. Not everybody
  | does...
 
    | m8s wrote:
    | They never suggested otherwise. They simply shared their
    | current setup.
 
    | sseagull wrote:
    | People who have been woodworking for decades can own very
    | expensive tools so that they can create very complicated
    | things.
    | 
    | People who are experts in cars can own very expensive cars
    | and tools to tune them.
    | 
    | People who have been working in music can have very expensive
    | instruments and expensive headphones, microphones,
    | sequencers, etc.
    | 
    | We seem to be looking down on experienced "computer experts"
    | and wanting to take their tools away. It's been grinding my
    | gears lately.
 
      | cerved wrote:
      | bruh nobody's looking down on people running their own metal
      | 
      | Server hardware is fun, but it's not trivial to manage, buy
      | or run.
      | 
      | So when someone talks about how they've managed servers for 2
      | decades, own a house where they can install a 42U rack, and
      | how much better it is than a hosted solution, a lot of people
      | rightly point out that this is hardly feasible for most
      | people.
 
        | DarylZero wrote:
        | People don't need a 42U rack to put a server in their
        | closet though. You buy some surplus server on ebay and
        | throw it on a shelf, no big deal.
 
      | hackthefender wrote:
      | I don't think that's the point the commenter was making.
      | The analogous situation would be if someone posted that
      | they made their kitchen table from scratch, and the
      | commenter said that it's great but not everyone has a lathe
      | in their basement, so good that places like Ikea exist as
      | well.
 
    | bmurphy1976 wrote:
    | You don't need a 42u rack. You can run a cluster of Raspberry
    | Pi's hidden in your basement ceiling rafters like me.
 
      | ryukafalz wrote:
      | Short throw from that to this classic:
      | 
      | >  hm. I've lost a machine.. literally _lost_. it
      | responds to ping, it works completely, I just can't figure
      | out where in my apartment it is.
      | 
      | http://www.bash.org/?5273
 
      | Andrew_nenakhov wrote:
      | Racks are not that expensive, and are a _very_ good way to
      | keep things connected, powered, accessible and tidy.
      | 
      | Heaps of Pis in rafters will quickly turn into a cable
      | spaghetti hell tied into ugly knots.
 
        | [deleted]
 
    | zikduruqe wrote:
    | And a tolerant wife/spouse.
    | 
    | My closet of RPi's is quiet.
 
| zepearl wrote:
| I don't think that the core of the article is about pros&cons of
| the managed/unmanaged/virtualized/dedicated server/service
| approach, but about "why it would be a good idea to have your own
| dedicated or virtualized server (at least for a while), which is
| to assimilate know-how" (which can then be used in more abstract
| setups).
| 
| The total flexibility of such a server (compared to un/managed
| services) is a (great) bonus (not only at the beginning).
 
| LAC-Tech wrote:
| Full disclaimer, I'm very much not a sysadmin or devops guy.
| 
| However, every team I've been on recently has spent a lot of time
| struggling with gluing their AWS stuff together, diagnosing bugs
| etc. It didn't seem to save a heck of a lot of time at all.
| 
| I couldn't figure out AWS. But I could figure out how to host
| sites on a linux VPS.
| 
| So what's the story here - is serverless something that only
| makes sense at a certain scale? Because with tools like Caddy, the
| 'old fashioned' way of doing it seems really, really easy.
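| 
| For example, the whole "serve my app over HTTPS" story on a fresh
| VPS is roughly this (domain, paths and port are made up, and it
| assumes caddy is installed and DNS already points at the box):
| 
|     # write a minimal Caddyfile
|     cat > /etc/caddy/Caddyfile <<'EOF'
|     example.com {
|         # static files
|         root * /var/www/site
|         file_server
|         # proxy API calls to the app listening locally
|         reverse_proxy /api/* localhost:8080
|     }
|     EOF
| 
|     # certificates are obtained and renewed automatically
|     caddy run --config /etc/caddy/Caddyfile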
 
  | MattGaiser wrote:
  | A lot of it is lack of awareness of things like Caddy or any
  | other tools that simplify the process.
  | 
  | I did not know about it until I googled it right now. I have
  | spent days/even two weeks figuring out how to set up Nginx and
  | for all I know I did it terribly wrong. I paired it with other
  | tools that I do not even remember. But I would be starting from
  | scratch again if I needed to set another one up.
  | 
  | So a lot might come down to that. I was on a team that
  | transitioned from an owned server to cloud because one day one of
  | the test servers went down and, after a week of trying, nobody
  | knew how to fix it. We realized at that point that if a server
  | caused a production error, we were utterly screwed: someone who
  | had since left had set it up, and nobody had a clue where to
  | begin fixing it beyond reading endless tutorials and whatever
  | came up in Google searches.
  | 
  | The server infrastructure was cobbled together in the first
  | place and for a period was theoretically maintained by people
  | who didn't even know the names of all the parts.
  | 
  | At least with cloud, there is an answer of sorts that can be
  | had from the support team.
 
    | peterbabic wrote:
    | Even with Caddy, there are so many rabbit holes to go down.
    | My current one is rootless. I feel like I'm in a completely
    | different world compared to rootful. Learned a ton though.
 
  | bblb wrote:
  | Caddyserver is an Apache/Nginx killer, and in a couple of years
  | we will talk about Caddy as if it's always been the default ("why
  | did we keep fighting with apache/nginx all those years, silly
  | us"). Seriously. It's just a completely different way to think
  | about web servers and automation. I'm just amazed it took all
  | these years to emerge.
 
    | LoveGracePeace wrote:
    | I love Apache, do not love Nginx, and won't be looking at Caddy
    | (it's written in Golang). Apache (and even Nginx) is easy to
    | set up as a reverse proxy and can simultaneously serve static
    | content and act as the main HTTPS certificate endpoint, as well
    | as do a few dozen other things like load balancing, rate
    | limiting, etc.
 
  | lamontcg wrote:
  | > is serverless something that only makes sense at a certain
  | scale?
  | 
  | Other way around. With enough scale you should be able to make
  | hosting your own datacenter work.
  | 
  | The problem is that the people you hire tend to go off buying too
  | much Enterprise-class shit and empire building, and the whole
  | thing winds up costing 10 times as much as it should, because
  | they want toys to play with, resume material, and a vendor to
  | share risk with and to blame.
  | 
  | Only thing Amazon did to build out their internal IT ops
  | exceptionally cheaply and eventually sell it as the AWS cloud
  | service was to focus on "frugality" and fire anyone who said
  | expensive words like "SAN". And they were ordered in no
  | uncertain terms to get out of the way of software development
  | and weren't allowed to block changes the way that ITIL and CRBs
  | used to.
  | 
  | I didn't realize how difficult that would be to replicate
  | anywhere else and foolishly sold all my AMZN stock options
  | thinking that AWS would quickly get out competed by everyone
  | being able to replicate it by just focusing on cheap horizontal
  | scalability.
  | 
  | These days there is some more inherent stickiness to it all
  | since at small scales you can be geographically replicated
  | fairly easily (although lots of people still run in a single
  | region / single AZ -- which indicates that a lot of businesses
  | can tolerate outages so that level of complexity or cost isn't
  | necessary -- but in any head-to-head comparison the "but what
  | if we got our shit together and got geographically
  | distributed?" objection would be raised).
 
  | shepherdjerred wrote:
  | I've used serverless on small personal projects where paying
  | for a VPS or EC2 instance would be cost prohibitive, e.g. would
  | I really want to pay $5/mo for a small throwaway weekend
  | project? Probably not.
  | 
  | But what if the cost is $.0001 per request? It becomes a very
  | convenient way to make all of my personal projects permanently
  | accessible by hosting on S3 + Lambda.
  | 
  | Even in large workloads it makes sense. Much of AWS is
  | migrating from instances to AWS Lambda. There are some
  | workloads where persistent instances make sense, but a lot of
  | common use cases are perfect for Lambda or similar serverless
  | technologies.
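  | 
  | Back-of-the-envelope, using that rough per-request figure (and
  | ignoring free tiers and compute charges):
  | 
  |     # requests per month before a $5 VPS becomes the cheaper option
  |     echo '5 / 0.0001' | bc    # -> 50000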
 
| chasil wrote:
| -"As for scripting, commit to getting good at Bash."
| 
| That advice can cause substantial headache on Ubuntu/Debian,
| where the Almquist shell is /bin/sh. This does not implement much
| of bash and will fail spectacularly on the simplest of scripts.
| This is also an issue on systems using Busybox.
| 
| A useful approach to scripting is to grasp the POSIX shell first,
| then facets of bash and Korn as they are needed.
| 
| -"As a practical goal, you should be able to recreate your host
| with a single Bash script."
| 
| This already exists as a portable package:
| 
| https://relax-and-recover.org/
| 
| -"For my default database, I picked MySQL."
| 
| SQLite appears to have a better SQL implementation, and is far
| easier for quickly creating a schema (a set of tables and indexes).
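| 
| For instance, a throwaway schema is one command away (file and
| table names here are just an example, assuming the sqlite3 CLI is
| installed):
| 
|     sqlite3 app.db \
|       "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE NOT NULL);"
|     sqlite3 app.db ".schema users"   # show the resulting DDL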
 
  | cure wrote:
  | > That advice can cause substantial headache on Ubuntu/Debian,
  | where the Almquist shell is /bin/sh. This does not implement
  | much of bash and will fail spectacularly on the simplest of
  | scripts. This is also an issue on systems using Busybox.
  | 
  | At least for Debian and Ubuntu, that's why we start bash
  | scripts with #!/bin/bash, of course.
  | 
  | Your point is valid for Busybox, though.
 
    | chasil wrote:
    | > that's why we start bash scripts with #!/bin/bash
    | 
    | That will also fail spectacularly, as bash does not behave
    | the same when called as /bin/bash as it does when it is
    | /bin/sh.
    | 
    | I have principally noticed that aliases are not expanded in
    | scripts unless a shopt is issued, which violates POSIX.
    | 
    | Forcing POSIXLY_CORRECT might also help.
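    | 
    | A minimal illustration of the two workarounds:
    | 
    |     #!/bin/bash
    |     # either opt back in to alias expansion explicitly...
    |     shopt -s expand_aliases
    |     # ...or put bash into POSIX mode instead, e.g. with
    |     # `set -o posix` here or POSIXLY_CORRECT=1 in the environment
    |     alias p=printf
    |     p 'hello\n'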
 
      | gh02t wrote:
      | I would assume if you put /bin/bash as your shebang that
      | you're expecting to get bash-isms. I think the problem
      | you're complaining about (which is a real one) is people
      | putting /bin/sh and expecting bashisms. Debuntu being
      | problematic here is more a side effect of bad practice.
 
        | chasil wrote:
        | Bash has a POSIX mode.
        | 
        | Knowing when to switch into and out of this mode, and
        | what impact it has, is a more advanced subject that
        | should not burden those learning the Bourne family.
        | 
        | It is better to start with Almquist, or another pure
        | POSIX implementation, with documentation specific to
        | standard adherence.
        | 
        | More advanced shell features should wait.
 
        | gh02t wrote:
        | I have mixed feelings. On paper I agree with you, people
        | should start with POSIX shell. In practice I'm not sure
        | how relevant that is anymore. I'm not really convinced
        | _bash_ should be the default thing people learn, but I
        | think there's a decent argument that people should just
        | start off with the shell they're going to actually use.
        | You _should_, however, be aware that there _is_ a
        | distinction, and if you're learning/writing bash and not
        | POSIX shell you should specify /bin/bash, not /bin/sh. But
        | you don't necessarily need to know all the nuances of how
        | bash differs unless you have a need to write POSIX
        | compliant shell.
 
        | chasil wrote:
        | For me, I just want a shell that works.
        | 
        | Without a nuanced understanding of standards, extensions,
        | and platform availability, new bash users will get large
        | amounts of shell usage that doesn't work.
        | 
        | To avoid that frustration, learn POSIX. That works
        | everywhere that matters.
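        | 
        | A small illustration of the difference (nothing exotic):
        | 
        |     #!/bin/sh
        |     # POSIX constructs: work in dash, ash, mksh, ksh and bash
        |     name="world"
        |     if [ "$name" = "world" ]; then
        |         printf 'hello %s\n' "$name"
        |     fi
        |     # bash-only equivalents that break under a strict /bin/sh:
        |     #   [[ $name == world ]], arrays, ${name^^}, echo -e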
 
      | ericbarrett wrote:
      | Not sure what you are saying, bash behaves as bash when
      | invoked as /bin/bash, and Bourne-shell-ish when invoked as
      | /bin/sh. Lots more detail in the man page.
      | 
      | I've never seen use of aliases in a bash script...? They
      | are generally for CLI convenience.
 
        | chasil wrote:
        | Alias expansion within scripts is mandated by POSIX.
        | 
        | When bash is not in POSIX mode, it violates the standard:
        | 
        |     $ ll /bin/sh
        |     lrwxrwxrwx. 1 root root 4 Nov 24 08:40 /bin/sh -> bash
        | 
        |     $ cat s1
        |     #!/bin/sh
        |     alias p=printf
        |     p hello\\n
        | 
        |     $ cat s2
        |     #!/bin/bash
        |     alias p=printf
        |     p world\\n
        | 
        |     $ ./s1
        |     hello
        | 
        |     $ ./s2
        |     ./s2: line 3: p: command not found
 
        | ericbarrett wrote:
        | That's very nice, it's POSIX-compliant when invoked as
        | #!/bin/sh and _sane_ when invoked as #!/bin/bash --
        | exactly what I'd want.
 
        | chasil wrote:
        | If you want a portable script, then you don't want that
        | behavior.
 
        | jjnoakes wrote:
        | Sure but someone putting #!/bin/bash at the top of their
        | script, written for non-POSIX bash, won't have that
        | issue...
 
    | netr0ute wrote:
    | Why would anyone want to target bash specifically, which
    | doesn't exist on all systems, instead of just sticking to
    | what's implemented in /bin/sh?
 
      | cure wrote:
      | Because you're not going to have a great time with /bin/sh
      | (i.e. dash or the like) if you want to do anything more
      | than very, very basic scripts.
 
        | netr0ute wrote:
        | Relying on bash is a recipe for non-portability.
 
        | lamontcg wrote:
        | If you're not publishing your scripts and you're running
        | your own infrastructure, you probably don't care about
        | portability at all.
 
        | chasil wrote:
        | None of the BSDs use bash in their base. Apple recently
        | switched from bash to zsh. OpenBSD uses a descendant of
        | pdksh.
        | 
        | Another major user of a pdksh descendant is Android
        | (mksh), with a truly massive install base.
        | 
        | Some of the bash problem, besides portability, is GPLv3.
        | That was a major factor for Apple. I don't want my script
        | portability linked to corporate patent issues. For this
        | and other reasons, I don't use bash-specific features,
        | _ever_.
 
      | massysett wrote:
      | You don't really know exactly what you'll get with /bin/sh
      | - you might get bash trying to behave like sh, you might
      | get dash. At least with /bin/bash you're hopefully getting
      | bash. Now you just have to wonder what version...
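      | 
      | If the version matters, the script can at least check for
      | itself - a small sketch:
      | 
      |     #!/bin/bash
      |     # bail out early if this bash is too old for the features
      |     # used further down
      |     if (( BASH_VERSINFO[0] < 4 )); then
      |         echo "need bash >= 4, got $BASH_VERSION" >&2
      |         exit 1
      |     fi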
 
  | zokier wrote:
  | > That advice can cause substantial headache on Ubuntu/Debian,
  | where the Almquist shell is /bin/sh. This does not implement
  | much of bash and will fail spectacularly on the simplest of
  | scripts.
  | 
  | That's not really a problem as long as you use #!/bin/bash
  | shebang, and there is nothing wrong in doing that.
 
    | NexRebular wrote:
    | Unless bash lives in /usr/local/bin/bash
 
      | nathanasmith wrote:
      | #!/usr/bin/env bash
 
      | chungy wrote:
      | and then you use "#!/usr/bin/env bash"
 
  | [deleted]
 
  | notyourday wrote:
  | > That advice can cause substantial headache on Ubuntu/Debian,
  | where the Almquist shell is /bin/sh.
  | 
  | #!/bin/bash
  | 
  | There, I fixed your "what shell is /bin/sh" problem.
 
    | chasil wrote:
    | Unless you actually look at real Linux deployments, which are:
    | 
    |     #!/bin/mksh
    | 
    | Android doesn't allow GPL code in userland, and the installed
    | base is massive.
 
      | notyourday wrote:
      | > Android doesn't allow GPL code in userland, and the
      | installed base is massive.
      | 
      | You aren't administering Android devices.
      | 
      | Stop obsessing about writing portable scripts. Write
      | scripts for the targets that you are going to run them on.
 
  | johnchristopher wrote:
  | I settled on using /bin/sh for portability. If there is something
  | that can't be done with sh but can be done with bash, then it
  | means a python script is better anyway. I don't want to deal with
  | bashisms and ksh and the different Debian/Ubuntu/RHEL takes on
  | bash.
  | 
  | It's frustrating that most google search results and shell
  | script search results on SO almost always conflate bash and sh.
 
    | jenscow wrote:
    | Yes, well if a script I wrote has somehow ended up on a
    | machine without bash, then I'd be more worried about other
    | assumptions the script makes.
 
      | johnchristopher wrote:
      | That's missing the point.
      | 
      | Some servers I know don't have vim. The Traefik docker image
      | runs ash and not bash. The Tomcat image doesn't have vim.
      | Etc. /bin/sh is there. No worries about assumptions. No
      | bashisms, no fish, no zsh.
 
        | jenscow wrote:
        | That's fine. My scripts tend to run on a single machine..
        | otherwise, probably the same/similar Linux distro.
        | 
        | So for me, if there's not even bash then I've also surely
        | not accounted for other peculiarities on the system.
 
  | encryptluks2 wrote:
  | > -"As a practical goal, you should be able to recreate your
  | host with a single Bash script."
  | 
  | I disagree with this. A single bash script configuring an entire
  | host can be overly complex and very difficult to follow. As
  | someone who has created complex bash scripts, I know this will
  | become very time consuming and prevent you from making many
  | changes without significant effort. I'd suggest familiarizing
  | yourself with tools like cloud-init and Ansible.
 
    | [deleted]
 
    | jenscow wrote:
    | My take on that was that host creation should be simple enough
    | to only require bash.
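    | 
    | Something in the spirit of this toy sketch (package list, app
    | profile names and paths are placeholders; Debian/Ubuntu
    | assumed):
    | 
    |     #!/bin/bash
    |     set -euo pipefail
    |     # packages the host needs
    |     apt-get update && apt-get install -y nginx postgresql ufw
    |     # firewall: ssh + web only
    |     ufw allow OpenSSH && ufw allow 'Nginx Full'
    |     ufw --force enable
    |     # drop in the app's web server config and reload
    |     cp ./conf/myapp.conf /etc/nginx/sites-enabled/
    |     systemctl reload nginx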
 
| jvalencia wrote:
| Having scaled up various business initiatives, and working
| through countless scaling issues, I would recommend managed
| services like anyone else with experience...
| 
| However! When I spin up my own side projects, it is sooo much
| easier to just go into the command line and spin something up
| directly --- it does make me wonder whether some small amount of
| expertise can really change things. By the time you're
| orchestrating AWS services, docker containers, kubernetes and more
| --- would it have been so bad to run a 10-line bash script on a
| few cheap VMs to set yourself up?
| 
| Even typing that, I realize how much time managed services save
| you when you need them. Change management is really what those
| services offer you - even if a one-off setup is easier by hand.
 
  | locusofself wrote:
  | I totally agree. I recently set up a service using docker,
  | terraform, and AWS fargate. It was interesting, but everything
  | felt like such an abstraction. Firing up a VM and running the
  | app would have taken me as little as 10 minutes vs a multiple
  | day research project. Or using ansible would have taken maybe a
  | couple hours.
 
___________________________________________________________________
(page generated 2022-01-28 23:00 UTC)