|
| twblalock wrote:
| You can try to build a monolith that is modular enough to break
| up later. But I have never seen it happen, and I've been around
| for a while now.
|
| What actually happens, 100% of the time in my personal
| experience, is that you end up with both the old monolith and new
| microservices, the monolith never gets fully broken up, and now
| you need to support two development paradigms forever.
|
| Has anyone here ever seen a monolith be successfully 100% broken
| up into microservices?
| groestl wrote:
| Not only broken up, but merged again after a while as well.
| hfjcic7 wrote:
| mtkd wrote:
| Any team starting with microservices on an unvalidated concept
| likely hasn't built a big project from the ground up before
|
| If it's a small engineering team, there is nothing more optimal
| than working on a big scrappy vertical codebase in the early
| stages
|
| In the fortunate situation you need to start scaling -- breaking
| that out into MS later is usually low effort and fun
|
| If you break it up too early you often end up with logic ghettos
| forming in the wrong stack that become near impossible to
| relocate later
|
| I was talking to a startup last year who were hiring several
| hundred engineers to build a handful of microservice stacks in
| anticipation of the traffic they may get at launch (success
| expected because of previous unrelated founder experience), and
| wanting to make it easier to deploy vast engineering resources on
| it -- they've still not launched anything
| eanghel wrote:
| > breaking that out into MS later is usually low effort and fun
|
| In my experience, breaking down 5+ year old, multi-team
| monoliths has always been painful, frustrating, and required
| huge effort, especially when nobody remembers why certain
| things are the way they are. On top of that, getting enough
| buy-in from the business to do it was hard. I find it quite
| surprising to hear someone share the opposite experience, and
| I wonder what kind of environment and project size you were
| working with.
| alecthomas wrote:
| 100% agreed there. I've never seen this go smoothly, and
| usually the monolith lives forever, slowly shedding
| functionality but never quite disappearing.
| [deleted]
| revskill wrote:
| The issue is not that microservices are complicated. It's the
| CI/CD complexity.
| janef0421 wrote:
| I think an important thing to keep in mind when discussing
| monoliths or microservices is that you can build modular code
| without needing multiple binaries, processes, or server
| instances. If you follow best practices when creating a monolith
| -- well-defined modules with clear input and output boundaries --
| then it should be relatively easy to split those modules into
| separate programs and add communication between them, if it
| later becomes necessary to move towards a microservices
| architecture.
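|
| As a rough illustration of that kind of boundary (hypothetical
| names, plain TypeScript assumed): the rest of the app only ever
| imports the module's interface, so splitting it out later mostly
| means swapping the implementation behind it.
|
|     // billing/index.ts -- the module's only public surface; other
|     // modules import from here, never from billing internals.
|     import { randomUUID } from "node:crypto";
|
|     export interface Invoice {
|       id: string;
|       customerId: string;
|       totalCents: number;
|     }
|
|     export interface BillingModule {
|       createInvoice(customerId: string, totalCents: number): Promise<Invoice>;
|       getInvoice(id: string): Promise<Invoice | null>;
|     }
|
|     // In-process implementation today; an HTTP client satisfying
|     // the same interface is the "split into a service" step later.
|     export function createBillingModule(
|       store: Map<string, Invoice>
|     ): BillingModule {
|       return {
|         async createInvoice(customerId, totalCents) {
|           const invoice = { id: randomUUID(), customerId, totalCents };
|           store.set(invoice.id, invoice);
|           return invoice;
|         },
|         async getInvoice(id) {
|           return store.get(id) ?? null;
|         },
|       };
|     }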
| KyeRussell wrote:
| I think a (but not the only) central conceit of a microservices
| architecture is that most development teams cannot be trusted
| to maintain proper separation of concerns, and enforcing at a
| technical level the inability to call Any Old Function is a big
| part of what you're buying into.
| eeperson wrote:
| This always seemed strange to me. If your team can't be
| trusted not to make spaghetti in a monolith, what stops them
| from making distributed spaghetti in microservices? In theory
| the extra work of making an API call would give you smaller
| bowls of spaghetti. However, once you add some abstraction to
| making these calls it seems like developers are empowered to
| make the same mess. Except now it is slower and harder to
| debug.
| anon23anon wrote:
| I 100% agree w/ this and play an enterprise architect type role
| at my employer but I would totally get lambasted if I proposed
| this. Honestly a lot of the ppl I work w/ aren't developers and
| never were. They just have big mouths and climbed the ranks.
| Sadly they're the ones who write my reviews and could get me
| canned so I have to play ball.
| rroot wrote:
| Been there, done that, caused a burnout after a long time.
| 0/10, wouldn't recommend. Pay was good though.
| basicallybones wrote:
| Microservices vs. monoliths is a false dichotomy in the present
| day.
|
| If you put microservices in a monorepo with a good build tool
| (like NX), put the common auth/logging/types/dtos/etc. functions
| in reusable libs, and version/release all the apps together, you
| get the best of both worlds.
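|
| A rough sketch of that setup (hypothetical package names and an
| NX-style workspace alias assumed): each app stays separately
| deployable while pulling shared concerns from one place.
|
|     // libs/logging/src/index.ts -- one shared implementation,
|     // versioned and released together with every app that uses it.
|     export function log(service: string, msg: string): void {
|       const at = new Date().toISOString();
|       console.log(JSON.stringify({ service, msg, at }));
|     }
|
|     // apps/orders/src/main.ts -- an independently deployable
|     // service importing the lib via the workspace alias, not npm.
|     import { log } from "@acme/logging";
|
|     log("orders", "service starting");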
|
| At a small scale, you can deploy your whole containerized stack
| on two big HA instances (monolithic infrastructure, so much
| easier). You can leverage common libraries without versioning
| hell/cross-repo coordination. If you have mediocre developers on
| the team and very aggressive development timelines, the monorepo
| structure helps enforce modular organization. If you are a good
| developer, your workflow gets very, very fast, and you can manage
| a lot of complexity.
|
| Add: additionally, if you have ever-changing requirements and
| some are obviously bolt-on one-offs that aren't going anywhere,
| you can limit damage to the code quality by splitting those
| features into separate microservices (making it much easier to
| safely deprecate/remove them in the future).
| twblalock wrote:
| A monorepo of microservices is the best pattern, but only if
| you have a dedicated team that keeps the monorepo buildable,
| builds tooling for it, and enforces best practices and the
| right culture. If you don't do that, you will end up with a
| huge mess -- I've seen it happen.
|
| For companies that can't dedicate the resources to do a
| monorepo properly, a repo per team is the best approach. The
| true value of microservices is decoupling teams so they can
| move independently without blocking each other.
|
| Also, needing to release the services together, or in a certain
| order, is a very bad and unscalable pattern. Teams need to be
| able to move independently. This requires a commitment to
| avoiding breaking API changes no matter what kind of repo
| structure you use -- and for the love of God, never let more
| than one service access a database table! A table should only
| ever have one service that accesses it, and API boundaries need
| to be enforced as the only way other services get to the data.
| Do those things and you will be better off no matter what repo
| structure you use.
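|
| A rough sketch of that boundary (hypothetical names and internal
| URL): only the owning service touches the table, and everyone
| else goes through its API.
|
|     // Inside the accounts service: the only code that is allowed
|     // to query the accounts table.
|     interface Account { id: string; email: string }
|
|     async function getAccountFromDb(id: string): Promise<Account | null> {
|       // e.g. SELECT id, email FROM accounts WHERE id = $1
|       return { id, email: "user@example.com" }; // stubbed for the sketch
|     }
|
|     // The API handler other services call instead of the table.
|     export async function handleGetAccount(id: string) {
|       const account = await getAccountFromDb(id);
|       return account ?? { error: "not_found" };
|     }
|
|     // In any other service: no SQL against accounts, only the API.
|     async function fetchAccount(id: string): Promise<Account> {
|       const res = await fetch(`https://accounts.internal/accounts/${id}`);
|       return res.json() as Promise<Account>;
|     }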
| basicallybones wrote:
| These are very good points. A few responses:
|
| - The monorepo tooling I prefer at startup scale (NX) does
| have a learning curve, but it has been pretty easy to
| maintain using automation (good linting, good build/test
| pipeline, etc.). For me, learning and leveraging the right
| monorepo tooling is way easier than having to enforce
| consistency across large numbers of repos with a small team
| or watch a low-tooling monolith decay into tightly-coupled
| spaghetti. I am also in a particular situation where the need
| for eventual scale is obvious, will come very quickly
| (clients are big), and the thought of having to rush a scale-
| inexperienced team (management and devs!) through a monolith-
| to-microservice migration on a tightly coupled codebase while
| meeting SLAs is so unpleasant that it makes me a bit sick to
| my stomach to think about it.
|
| - The monorepo framework/tooling matters a lot. For instance,
| NX is small-scale friendly and largely can be overseen by
| senior Typescript devs. Bazel, on the other hand, requires a
| hefty context switch and has a very steep learning curve.
|
| - You are right that releasing all services together does not
| scale past a certain point. My point is that microservice
| monorepos let you pivot quickly between all-together
| (monolithic) releases and individual app releases as
| appropriate for your scale. With good caching and parallel
| blue/green deployments, adding new services to an all-at-once
| build/deploy pipeline just uses a bit more compute for the
| pipeline without meaningfully impacting the pipeline run
| times.
|
| - Having to release services serially in a certain order
| obviously indicates something is seriously wrong under the
| hood.
|
| - You are very right about monorepo tooling/best practices,
| but I would extend that to any project that eventually will
| need to scale. Someone has to take ownership and enforce good
| practices. Spaghetti code can be harder to prevent in a
| feature-heavy monolith and certainly is harder to deal with
| for me if it is in many different repos (i.e., several
| monoliths).
|
| - You can have more than one monorepo for teams that are very
| different or have specific needs (i.e., different languages,
| mobile apps, etc.).
|
| - I wish I were in a situation where the people around me had
| a commitment to avoiding breaking API changes, but it just is
| not so because of aggressive development timelines. All-
| together versioning/releasing helps deal with (planned,
| intentional) breaking API changes very efficiently, because
| (as long as you can tolerate a failed API call once in a
| while as a blue/green deployment switches traffic) you're
| basically performing the deployment as if it is a monolith.
| theptip wrote:
| The assertion that monoliths are taboo is a bit odd; "monolith
| first" has been pragmatic best practice for years:
| https://martinfowler.com/bliki/MonolithFirst.html.
| mirekrusin wrote:
| Exactly, Sam Newman says the same thing in the "Building
| Microservices" book.
|
| I don't know where people got this idea, but I've noticed that
| the less experienced someone is, the more noise they make about
| it.
| ch4s3 wrote:
| I think this depends on who you listen to, but I'm a huge fan
| of the monolith and wouldn't break one up unless I had a
| specific tech need (e.g. some weird library), a performance
| issue unaddressable in the monolith, or a big organizational/
| people problem -- and in that order.
| miroz wrote:
| Ditto. You should start with a monolith with a vision of how to
| break it down into microservices (if ever needed).
|
| When my company got a contract for a new project, my colleague
| created a prototype, a bare-bones solution that had 9 web
| projects that communicated over REST, because microservices. I
| suggested starting with a monolith. Guess whose design was
| accepted because it was sexier. It never got to production, for
| multiple reasons, but part of it was that development was awfully
| slow. Orchestrating changes across services when development is
| in flux is extremely hard.
| jacurtis wrote:
| I think more startups need to consider building monoliths first.
|
| Microservice architecture definitely has the advantage at scale,
| but the advantage of monoliths is the speed at which you can
| build and improve them. Microservices require much more planning
| and architecture discussion compared to monoliths.
|
| Microservices can also offer big performance improvements over
| monoliths (being able to scale or tailor each microservice to
| its specific needs, as opposed to the "one size fits all" of a
| monolith). So yes, they can be right-sized in your cloud more
| easily and have tailored performance, but that comes at the cost
| of more complex deployment and management of the system. By
| comparison, monoliths can sometimes be managed without a
| dedicated SRE/DevOps/cloud engineering team.
|
| Monoliths tend to be cheaper to deploy (up to a certain point).
|
| Startups should consider building on monoliths more often. For
| proof-of-concept and MVP tools, monoliths are the way to go. Solo
| developers are also much better served by monoliths than by
| microservices. But I will say, if you scale big enough it will
| eventually make sense to leave the monolith behind and move
| towards microservices. Yes, there are exceptions (someone will
| write, "but XYZ company is worth 9 gajillion dollars and runs on
| a monolith"), but generally speaking I recommend starting with a
| monolith and moving to microservices after you hit market
| acceptance for your product.
| mirekrusin wrote:
| They are better at scale (hundreds of developers or machines
| needed to run the system).
|
| They are better at avoiding downtime during deployments because
| everything is built around supporting that -- but at a huge cost.
| Offline migrations -- which may take just seconds or minutes --
| are so much easier to do. We're talking about an order-of-
| magnitude difference in the complexity of adding new features.
|
| In the TypeScript world, a monorepo with shared packages where
| some of them are dockerized services (all under the same
| monorepo-wide version) is more than enough for most projects. The
| microservice crowd will call it a monolith (because the services
| can't be deployed independently, they likely share a SQL
| database, etc.), but it's a set of services. You can run it on
| docker/swarm/k8s -- scalability is not really a problem here
| until you're HUGE (hundreds of developers or hundreds of machines
| to deploy to). Refactoring, adding new features, dropping stuff,
| etc. is all easy; usually the type checker will guide you to all
| the places that need changing. A single set of migrations.
| Straightforward e2e tests. Fast local development where you can
| spin up the whole thing. Why do people like to complicate their
| lives when they can have this?
| mberning wrote:
| I have long-running Ruby and Java apps that I use for this
| purpose. They have build scripts, test frameworks, etc. When I
| get an idea for something new I add it to one of these existing
| projects, but I do it in a way that I can easily excise it in the
| future.
| plugin-baby wrote:
| Megalith!
| afhammad wrote:
| Curious to hear more. Are these personal projects or serving
| users? What are some examples of unrelated ideas you've tacked
| on to existing app servers?
| BiteCode_dev wrote:
| And then you stay on it, because most likely you're part of a
| project that will never have any requirement that needs anything
| else.
| jrochkind1 wrote:
| This is written like it's a novel take bucking the trend, but I
| feel like most things I have seen on this topic for at least
| several years now have the same observations and conclusions?
|
| If anyone wants to actually speak up for microservices, I feel
| like that's what needs a defense at this point!
| jameshart wrote:
| I think anyone who has taken a large monolith through a
| significant platform version upgrade or change, like .NET 2 to
| .NET 4, or Python2 to Python3, or Angular to React, would need
| a very persuasive argument to make them believe that starting a
| new project with a monolithic design was a good idea.
| KyeRussell wrote:
| I'd tell anyone who thinks that way that it's a very
| shortsighted view. If they had started with microservices
| there's a good chance the project wouldn't be around long
| enough to even go through that transition.
| jameshart wrote:
| So don't start with microservices. But also don't start
| with a monolith!
|
| If you have offline batch processes, don't make the mistake
| of implementing them in the same codebase as your website
| just because you already have the DB access and
| build/deploy tooling set up. If you have an admin portal
| and a public website that listen on different ports, don't
| run both listeners in the same process.
|
| 'don't build microservices yet' does not have to mean
| 'start with a monolith'.
| jolux wrote:
| Microservices to me are solving two problems: an organizational
| problem and a technical problem. The organizational problem is
| more obvious: keeping a monolithic codebase well-factored goes
| poorly with large, interdependent groups of teams.
| that while it's true that you can write a monolith in a modular
| style, in my experience enforcing the single responsibility
| principle in a monolith demands a level of discipline that is
| not feasible to maintain indefinitely in an environment with
| personnel turnover. When architectural coupling requires making
| coordinated changes to multiple services, the barrier is high
| enough that people will tend to avoid it instead.
| eanghel wrote:
| Wholeheartedly agree, I also feel like people are
| overestimating how much discipline you can collectively have
| in an organization. And if you create a monolith with hard-
| enough boundaries to enforce modularization then you might as
| well create multiple services (and not necessarily the one-
| microservice-per-entity kind of architecture -- just as many
| services as there are modules).
| vyrotek wrote:
| Completely agree! We build our .NET projects in a similar way
| too.
|
| My current SaaS company has 8 web projects and 1 core project.
| Some of the web projects are SPAs using BFF and others are APIs
| for customers or our mobile app. They aren't tiny sites. We're up
| to ~500 controllers. All in a single repo and automatically
| deployed as a monolith. We have a super small but very productive
| team which we attribute to this simple design.
| jjtheblunt wrote:
| BFF ?
| kroken wrote:
| Backend for frontend
| vyrotek wrote:
| Backends for Frontends. An alternative to making "one API to
| rule them all". Far less time spent trying to model an API in
| an abstract way that makes sense to many consumers.
|
| Instead, the API and its endpoints are designed for a specific
| client (e.g. a mobile app or SPA). For us, this also meant a more
| RPC-based API, where reusability is managed after the network
| hop.
|
| https://samnewman.io/patterns/architectural/bff/
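|
| A minimal sketch of the idea (hypothetical endpoints and internal
| URLs, Express-style TypeScript assumed): each frontend gets
| endpoints shaped for its own screens instead of one
| general-purpose API.
|
|     import express from "express";
|
|     const app = express();
|
|     // BFF endpoint for the mobile app: one call returns exactly
|     // what the home screen needs, already aggregated and trimmed.
|     app.get("/mobile/home", async (_req, res) => {
|       const [user, orders] = await Promise.all([
|         fetch("http://users.internal/me").then((r) => r.json()),
|         fetch("http://orders.internal/recent?limit=3").then((r) => r.json()),
|       ]);
|       res.json({ greetingName: user.firstName, recentOrders: orders });
|     });
|
|     // The SPA's BFF would expose different endpoints shaped for
|     // its own pages rather than sharing one general-purpose API.
|     app.listen(3000);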
| rwoerz wrote:
| The whole monolith vs. microservice discussion revolves around a
| false dichotomy and highly subjective, context-dependent
| definitions. For example, what if a monolith is integrated into a
| larger system landscape (e.g. due to an enterprise merger)? Is it
| still a monolith?
| [deleted]
| adra wrote:
| A monolithic architecture isn't about being the one and only
| compute tier. It's about how many distinct things any one tier
| does, which is why microservices go well with partitioned teams.
| Smaller compute business domains mean specialization and easily
| divisible work functions.
| jameshart wrote:
| Indeed. Most companies which are cited as 'running the whole
| system as a monolith' aren't _really_ a monolith. They don't
| actually run their CMS, their online comment system, their
| payment processing, their payroll, their recruitment databases,
| their build pipelines, their bug tracker, and their hardware
| asset management system, all out of one codebase with a single
| RDBMS on the backend.
|
| People will laugh at that concept, but go look at how mainframe
| systems work, and you might be surprised how close some old
| banking systems are to that level of monolithicness. There
| really are companies who run essentially all of those systems
| off a single ERP platform. Or worse, on Sharepoint.
| [deleted]
| mirekrusin wrote:
| Microservices is too catchy a name. It somehow implies less
| complexity to people, but the reality is quite the opposite.
| Calling them "a lot of little monoliths that we have to keep
| running and compatible with each other whenever we update one"
| would reflect what they are far more appropriately. A good
| analogy is replacing your wheels with better ones. In a non-
| microservice system you stop, change the wheels, and start
| driving again on the new wheels. With microservices you have to
| do it while driving, and that is going to be more complicated.
| rroot wrote:
| During the Covid pandemic, I watched from a distance a group of 6
| people start and destroy a project.
|
| After the 6 months, they didn't even have an MVP.
|
| This was run on a 6 month fund from the local government. They
| didn't know each other but they all knew that after 6 months it'd
| be over unless they'd secure more funding.
|
| They spent the time faffing around, configuring things, splitting
| things, ... effectively making things harder for themselves.
| gw98 wrote:
| That's what we're currently working on.
|
| Only 20% of the work now goes into actually improving the
| product.
| actually_a_dog wrote:
| That may be better than 100% of the work going into improving
| the product while moving at 1/5 the speed it should.
| gw98 wrote:
| We're still doing 100% of the work but with 20% of the
| quality.
| jameshart wrote:
| There are more than two architectures.
|
| I agree that if you don't have organizational scale,
| microservices solve problems you don't have.
|
| But there are at least three separate things that are
| 'monolithic' about a classical monolith: 1) the codebase, 2) the
| database, and 3) the build and release process.
|
| And you can certainly modularize your codebase into libraries or
| independent modules, but retain monolithic builds and releases
| against a monolithic relational database - a 'modular monolith'.
|
| But I'm not sure that's the _interesting_ part to break up,
| especially if you are not trying to solve the organizational
| problems microservices help with.
|
| On the other hand, modularizing your build and deployment process
| could be helpful even if you are a solo developer - because it
| helps you reduce cycle times and make smaller changes, which
| means you can trace bugs more quickly.
|
| And modularizing data access is better for security and reducing
| blast radius of bugs or errors.
|
| Modularizing the codebase seems much more like a problem you only
| need to tackle when your organizational complexity increases.
| mind-blight wrote:
| Similarly, I feel like microservices are implemented to help
| with "scale", but it's often fuzzy which metric is being
| scaled. You can scale:
|
| 1. Number of developers in your org
| 2. Number of users on your platform
| 3. Amount of data or compute used
|
| Not all products need to scale along all of these dimensions,
| and a microservice architecture may or may not help much. I'd
| argue that you probably get better results for #2 by optimizing
| database queries and implementing a good caching layer.
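|
| For example (a minimal sketch; loadPopularProducts is a
| hypothetical query), even a small in-process TTL cache in front
| of a hot query can absorb a lot of user growth before any
| architectural change is needed:
|
|     // Tiny TTL cache: reuse the same query result for 30 seconds
|     // instead of hitting the database on every request.
|     const cache = new Map<string, { value: unknown; expiresAt: number }>();
|
|     async function cached<T>(
|       key: string,
|       ttlMs: number,
|       load: () => Promise<T>
|     ): Promise<T> {
|       const hit = cache.get(key);
|       if (hit && hit.expiresAt > Date.now()) return hit.value as T;
|       const value = await load();
|       cache.set(key, { value, expiresAt: Date.now() + ttlMs });
|       return value;
|     }
|
|     // Usage, assuming a hypothetical loadPopularProducts():
|     // const products = await cached("popular", 30_000, loadPopularProducts);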
| KyeRussell wrote:
| The core question "what is this in service of?" is what
| separates good developers from mediocre ones.