| [HN Gopher] Show HN: Shadeform - Single Platform and API for Pro...
___________________________________________________________________
Show HN: Shadeform - Single Platform and API for Provisioning GPUs
Hi HN, we are Ed, Zach, and Ronald, creators of Shadeform
(https://www.shadeform.ai/), a GPU marketplace for seeing live
availability and prices across the GPU market and for deploying
and reserving on-demand instances. We have aggregated 8+ GPU
providers into a single platform and API, so you can easily
provision instances like A100s and H100s wherever they are
available.

From our experience working at AWS and Azure, we believe the cloud
could evolve from all-encompassing hyperscalers (AWS, Azure, GCP)
to specialized clouds for high-performance use cases. After the
launch of ChatGPT, we noticed GPU capacity thinning across major
providers and emerging GPU and HPC clouds, so we decided it was
the right time to build a single interface for IaaS across clouds.

With the explosion of Llama 2 and open source models, we are
seeing individuals, startups, and organizations struggling to
access A100s and H100s for model fine-tuning, training, and
inference. This encouraged us to help everyone access compute and
increase the flexibility of their cloud infra. Right now, we've
built a platform that lets users find GPU availability and launch
instances from a unified interface. Our long-term goal is to build
a hardwareless GPU cloud where you can leverage managed ML
services to train and infer across clouds, reducing vendor
lock-in.

We shipped a few features to help teams access GPUs today:

- a "single pane of glass" for GPU availability and prices;

- a "single control plane" for provisioning GPUs in any cloud
  through our platform and API;

- a reservation system that monitors real-time availability and
  launches GPUs as soon as they become available.

Next up, we're building multi-cloud load-balanced inference,
streamlining self-hosting of open source models, and more.
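
For a feel of the API, here is a minimal sketch of what finding
and provisioning an instance could look like. The endpoint paths,
field names, and auth header are illustrative placeholders rather
than our published API reference:

    # Hypothetical sketch of provisioning through a REST API.
    # Endpoints and payload fields are assumptions for illustration.
    import os
    import requests

    BASE = "https://api.shadeform.ai/v1"  # placeholder base URL
    headers = {
        "Authorization": f"Bearer {os.environ['SHADEFORM_API_KEY']}"
    }

    # 1. See live availability and prices across providers.
    offers = requests.get(f"{BASE}/instances/types",
                          headers=headers).json()

    # 2. Pick the cheapest available A100 offer.
    a100s = [o for o in offers
             if o["gpu_type"] == "A100" and o["available"]]
    cheapest = min(a100s, key=lambda o: o["hourly_price"])

    # 3. Launch it (in your own account or in ShadeCloud).
    resp = requests.post(f"{BASE}/instances", headers=headers,
                         json={"cloud": cheapest["cloud"],
                               "instance_type": cheapest["instance_type"]})
    print(resp.json())
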
You can try our platform at https://platform.shadeform.ai. You can
provision instances in your own accounts by adding your cloud
credentials and API keys, or you can leverage "ShadeCloud" and
provision GPUs in our accounts. If you deploy in your account, it
is free. If you deploy in our accounts, we charge a 5% platform
fee.

We'd love your feedback on how we're approaching this problem.
What do you think?
Author : edgoode
Score : 34 points
Date : 2023-08-17 17:07 UTC (5 hours ago)
|
|
| Takennickname wrote:
| Surprisingly little engagement with this post. I'm not in the
| market myself, but can people who use GPUs and didn't find this
| offering attractive explain why?
| thecupisblue wrote:
| First off, the color and the font of the hero look so neat
| together. Just giving straight up simple, professional but modern
| vibes. Good job whoever picked it!
|
| Now, regarding the product - this is amazing. Beyond saving time
| and money digging through providers, the part I actually find
| most impactful is the simplification of the AWS console mess
| down to a niche use case. While I understand GPUs are the hot
| thing right now and there is a scramble for every spare FLOP, if
| you ever decide to pivot, I'd gladly pay more money each month
| to use such a simplified, niche AWS/generic cloud console.
|
| Can't wait to have a chance to play with this more, keep up the
| good work and good luck!
| Cholical wrote:
| Thank you for the kind words! I'm Ronald, one of the cofounders
| of Shadeform.
|
| Simplifying instance provisioning in AWS is definitely one of
| our goals! With our current AWS integration, we set up the VPC
| networking stack so that all users have to worry about is
| picking their instance. We also hope to integrate more cloud
| features and managed services to make this a fully fledged
| cross-cloud console.
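|
| Roughly, the kind of VPC bootstrapping that implies looks like
| the sketch below (boto3; the region, CIDRs, AMI, and instance
| type are placeholders rather than our exact setup):
|
|     import boto3
|
|     ec2 = boto3.client("ec2", region_name="us-east-1")
|
|     # VPC with one public subnet, so the user only has to pick
|     # an instance.
|     vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
|     subnet_id = ec2.create_subnet(
|         VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]
|
|     # Internet gateway and default route for in/outbound traffic.
|     igw = ec2.create_internet_gateway()["InternetGateway"]
|     igw_id = igw["InternetGatewayId"]
|     ec2.attach_internet_gateway(InternetGatewayId=igw_id,
|                                 VpcId=vpc_id)
|     rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]
|     ec2.create_route(RouteTableId=rt["RouteTableId"],
|                      DestinationCidrBlock="0.0.0.0/0",
|                      GatewayId=igw_id)
|     ec2.associate_route_table(RouteTableId=rt["RouteTableId"],
|                               SubnetId=subnet_id)
|
|     # The user-facing part: pick a GPU instance (e.g. 8x A100).
|     ec2.run_instances(
|         ImageId="ami-PLACEHOLDER", InstanceType="p4d.24xlarge",
|         MinCount=1, MaxCount=1,
|         NetworkInterfaces=[{"SubnetId": subnet_id,
|                             "DeviceIndex": 0,
|                             "AssociatePublicIpAddress": True}])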
| doctorpangloss wrote:
| - SSH access isn't super useful. If I have to author a
| bootstrapping script for my system it's too much friction.
|
| - the people who thrive at this use orchestration, like Slurm or
| Kubernetes. So the nodes I buy should join automatically to my
| orchestration control plane.
|
| - people who don't use orchestration or don't own their
| orchestration will not run big jobs or be repeat customers. It
| doesn't make sense to use nonstandard orchestration. I understand
| that it is something that people do, but it's dumb.
|
| - so basically I would pay for a ClusterAutoscaler across clouds.
| I would even pay a 5% fee for it automatically choosing the
| cheapest of the fungible nodes. I am basically describing
| Karpenter for multiple clouds (see the sketch after this list).
| Then at least the whole offering makes sense from a sophisticated
| person's POV: your Karpenter clone can see e.g. a Ray CRD and
| size the nodes, giving me a firm hourly rate or even an upfront
| price to approve.
|
| - I wouldn't pay that fee to use your control plane, I don't want
| to use a startup's control plane or scheduler.
|
| - I'm not sure why the emphasis on GPU availability or blah blah
| blah. Either AWS/GCE/AKS grants you quota or it doesn't. Your
| thing ought to delegate and automate the quota requests, maybe
| you even have an account manager at every major cloud for that to
| bundle it all.
|
| - as you probably have noticed, the off-brand clouds play lots of
| games with their supposed inventory. They don't have any
| expertise running applications or doing networking; they are
| ex-crypto miners. I understand that they offer a headline price
| that is attractive, but for an LLM training job they "vast"ly
| overpromise their "core" offering.
|
| - if you really want to save people money on GPUs, buy a bunch of
| servers and rack them and sell a lower hourly rate.
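|
| To make "cheapest of the fungible nodes" concrete, here is a
| rough sketch of the selection logic such an autoscaler would
| need. The Offer shape and the price feed are hypothetical; this
| is not Karpenter code, just the idea:
|
|     from dataclasses import dataclass
|
|     @dataclass
|     class Offer:
|         cloud: str           # e.g. "aws", "gcp", "lambda"
|         instance_type: str
|         gpus: int
|         gpu_type: str
|         hourly_price: float
|         available: bool
|
|     def cheapest_fungible(offers, gpu_type, gpus_needed):
|         """Cheapest available offer that covers the request,
|         treating equivalent GPU shapes across clouds as
|         interchangeable."""
|         candidates = [
|             o for o in offers
|             if o.available and o.gpu_type == gpu_type
|             and o.gpus >= gpus_needed
|         ]
|         if not candidates:
|             return None  # nothing available; poll again or reserve
|         return min(candidates,
|                    key=lambda o: o.hourly_price / o.gpus)
|
|     # e.g. a Ray CRD asking for 8x A100 gets sized onto whichever
|     # cloud's 8-GPU node is cheapest right now.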
| [deleted]
| edgoode wrote:
| Thank you for the feedback. We're still early in this and are
| planning on moving in some of the directions you mentioned.
|
| - We agree that moving towards 'Karpenter for multiple clouds'
| would be more valuable for some use cases and hope to support
| that feature soon.
|
| - We do help customers with one-off quota requests, and it is a
| feature we want to bake into our platform on top of aggregating
| demand in our accounts. Many companies with AWS/GCE/AKS quota
| still cannot reliably get on-demand instances due to capacity
| shortages.
| doctorpangloss wrote:
| Yeah I mean I'm sure you look at Karpenter and think "well it
| does everything for free, and the code to choose the cheapest
| node would be straightforward." Kubernetes already has
| sophisticated scheduling algorithms that could consider price
| as a constraint.
|
| I can't say what people will actually pay for, because CTOs and
| engineers are penny pinchers; they will go through a lot of pain
| to pay $0. They are the worst customers. IMO most
| allegedly B2B Y Combinator offerings are really B2C in
| disguise, selling productivity apps and pretty interfaces to
| 22 year olds with busy schedules of Bumble swiping who happen
| to work as developers and PMs at big enterprises. Because the
| senior people I know with the real budgets, they look at a
| thing and think "I'd program this with my headcount to save a
| 5% fee." This is coming from someone who does charge a
| royalty only because it is customary in my business to do so.
|
| People who spend money love their pricing "formatted" a
| certain way. CTOs love it to be formatted as "free" with a
| bunch of trickle priced exorbitant usage gotchas (Snowflake).
| They don't love prices formatted as royalties. Time will tell
| of course.
|
| Anyway, most use cases don't even make sense; they are deeply
| negative on ROI. Most enterprises cannot do software R&D like
| LLM model training or even serving. The biggest success story
| in town uses Kubernetes. I'm not sure there's space for 10 more
| control planes to run on top of your control planes; they add a
| lot of complexity for little gain.
|
| A bunch of Kubernetes manifests to fine-tune LLaMA 2 on a
| dataset hosted in blob storage on DGX machines is a commodity.
| People think it's sensitive, and there's a bonanza for people
| who can author that YAML, but it's inevitable that someone will
| release a proper multi-node training job with vanilla resources.
| Yet here we are, with a dozen "free", trickle-priced, weird CRD
| control-plane-esque products obscuring this.
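|
| As a rough illustration of "vanilla resources", such a job can
| be expressed with the Kubernetes Python client (the image,
| command, and node count are placeholders, and the torchrun
| rendezvous wiring is omitted; a sketch, not a tested recipe):
|
|     from kubernetes import client, config
|
|     config.load_kube_config()
|
|     container = client.V1Container(
|         name="trainer",
|         image="ghcr.io/example/llama2-finetune:latest",
|         command=["torchrun", "--nnodes=2",
|                  "--nproc_per_node=8", "train.py"],
|         resources=client.V1ResourceRequirements(
|             limits={"nvidia.com/gpu": "8"}),
|     )
|
|     job = client.V1Job(
|         metadata=client.V1ObjectMeta(name="llama2-finetune"),
|         spec=client.V1JobSpec(
|             completion_mode="Indexed",  # stable rank per pod
|             completions=2,
|             parallelism=2,
|             template=client.V1PodTemplateSpec(
|                 spec=client.V1PodSpec(
|                     restart_policy="Never",
|                     containers=[container])),
|         ),
|     )
|
|     client.BatchV1Api().create_namespaced_job("default", job)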
| edgoode wrote:
| Here are two demos of provisioning and reserving GPUs through our
| platform:
|
| Provisioning: https://www.youtube.com/watch?v=7WyKPMS80Pk
|
| Reservations: https://www.youtube.com/watch?v=Ab5GmfMYWKA
| 71a54xd wrote:
| Any plans to add providers like TensorDock or Vast?
| edgoode wrote:
| We're working on adding those as well.
| mike_d wrote:
| Be super careful inserting yourself as a reseller of GPUs
| (ShadeCloud).
|
| You'll quickly find that your platform's primary use is to turn
| stolen credit cards into cryptominers.
| latchkey wrote:
| Crypto doesn't use GPUs to mine anymore. After ETH switched to
| PoS, the whole GPU mining world was decimated (thankfully).
| Even on a free tier, you'd be lucky to make a few dollars a
| day, for a whole lot of upfront work.
|
| That said, I agree that you do have to be careful reselling
| anything... people will find nefarious uses, it just isn't
| mining anymore.
| Cholical wrote:
| Appreciate the feedback and will definitely watch out!
| mike_d wrote:
| I'm not referring to normal users who are trying to generate
| ROI. When your actual cost is $0, even GPU mining Monero or
| shit coins is cash flow positive and relatively low risk.
|
| A "few dollars a day" is good money for people in some parts
| of the world.
| latchkey wrote:
| I didn't discount what you said at all. I just clarified
| that mining is less of a concern these days. It isn't even
| a few dollars a day at this point, it is pennies.
|
| It's easy to mitigate with credit card signup and individual
| approval.
|
| Go try to get an account with CoreWeave and you'll see what
| I mean.
| marcopicentini wrote:
| It's like Cloud66 but with GPUs in the headline, isn't it?
| What's the difference from Cloud66?
___________________________________________________________________
(page generated 2023-08-17 23:00 UTC) |