[HN Gopher] Investors are happy to pay premium for tech, but not...
___________________________________________________________________
 
Investors are happy to pay premium for tech, but not for AI
 
Author : alex1212
Score  : 111 points
Date   : 2023-07-31 14:48 UTC (8 hours ago)
 
web link (www.bloomberg.com)
w3m dump (www.bloomberg.com)
 
| rvz wrote:
| Investors already know that this is a race to zero. Some tech
| companies, like Meta, are already at the finish line of this
| race and can afford to release their AI models for free,
| undercutting cloud-based AI models unless those do the same.
| 
| They are also realizing that many of these new 'AI startups'
| using ChatGPT or a similar AI service as their 'product' are a
| prompt away from being copied or duplicated.
| 
| The moat is quickly evaporating under pressure from $0 free AI
| models. All that needs to happen is for these models to be
| shrunk down and made better than the previous generation whilst
| still being available for free.
| 
| Whoever owns a model close to that is winning or has already won
| the race to zero.
 
| twelve40 wrote:
| I've seen some hype waves in my life, but it's probably the first
| one that truly unleashed the sleazy "influencer" types that
| regurgitate the same carousels they steal from each other. Even
| more intense than "crypto" now. That really kind of distracts
| from trying to gauge the meaning of this.
 
| akokanka wrote:
| Are investors still in cyber security startups or is that train
| long gone?
 
  | alex1212 wrote:
  | I don't think so, but compliance seems to be getting hot. Ryan
  | Hoover listed "comply or die" as a hot space in his recent
  | thesis.
 
| happytiger wrote:
| This is explained by the simple idea that only a few companies
| are in an arms race to create a general-purpose intelligence,
| and when one of them succeeds, all of the AI-powered systems
| will naturally consolidate into, or "become flavors" of, that
| general-purpose AI.
| 
| So what substantive and defensible advantage is your money buying
| in the AI ethos when this effect is essentially inevitable?
| 
| Answer: not much.
| 
| So it's very logical that the team, book of business and the tech
| platform itself are what are driving valuations.
 
| zby wrote:
| I have the feeling that we are at the MRP stage
| (https://en.wikipedia.org/wiki/Material_requirements_planning):
| companies have started using computers, but writing software to
| handle production processes is so new that nobody can write
| anything truly universal. Next comes the ERP stage, where we
| know some abstractions that apply to many companies and vendors
| like SAP can sell packaged software - but most of the money is
| in 'implementation' by consulting agencies.
 
| nkohari wrote:
| So many AI startups are really just paper-thin layers over
| publicly-available models like GPT. There's value there, but
| probably not enough to support $100M+ valuations.
| 
| We've barely scratched the surface of what generative AI can do
| from a product perspective, but there's a mad dash to build
| "chatbots for $x vertical" and investors _should_ be a little
| skeptical.
 
  | coffeebeqn wrote:
  | I'm fairly certain some are just prompt prefixes. Maybe a
  | lookup to some 1st or 3rd party dataset
 
  | qaq wrote:
  | So many startups are just paper-thin layers over publicly-
  | available AWS services.
 
    | nkohari wrote:
    | The developer experience of AWS is so bad it creates a lot of
    | opportunity to provide value there. The same was also true
    | for Salesforce for a long time.
 
    | lispisok wrote:
    | Some other company's LLM being your secret sauce is different
    | than using AWS services to build your secret sauce on top of.
 
  | coding123 wrote:
  | That's my take too. Companies are spending money in the wrong
  | area. GPT and similar models should be used to re-categorize
  | all the data, or to enhance existing UIs by surfacing related
  | information.
  | 
  | Just replicating the ChatGPT interface, but for Your Taxes
  | (TM), is a slap in the face to computer users who already
  | can't tolerate typing data in.
 
| tamimio wrote:
| Good, I'm not the only one who's getting fed up with yet-another-
| chatbot or chat-with-your-files or whatever "startup".
 
  | alex1212 wrote:
  | No, we all are. Getting super tired of anything in the space
 
| rsynnott wrote:
| I feel like these hype cycles are getting quicker and quicker.
| 
| 2030, day 8 of the 17th AI boom: A starry-eyed founder shows up
| to a VC office with a pitch-deck for their GPT-47-based startup
| which automatically responds to Yelp reviews, only to be turned
| away; the VCs are done with that now, and will be doing robot
| dogs for the next week.
 
| twobitshifter wrote:
| I wonder how much of AI will be winner take all and how much will
| be value destruction. From an investor's standpoint, in LLMs
| you have a privately held business leading the market and open-
| source software following closely.
| 
| During the PC revolution you could buy Apple, HP, and Microsoft
| and know that you were capturing the hardware market. Here we
| see Nvidia, AMD, Apple, and (somewhat) Microsoft looking like
| the major beneficiaries, and the market is following that.
| Maybe it becomes an omni-platform market and people rush into
| OpenAI once it goes public.
 
| jasfi wrote:
| This is healthy skepticism and the acknowledgement that there are
| lots of free tools out there. You need to be much better than
| what's freely available. You need to persuade buyers to buy when
| they don't want to. I don't think any of that is new.
 
| rmbyrro wrote:
| http://web.archive.org/web/20230731171759/https://www.bloomb...
 
| reilly3000 wrote:
| What I've heard from YC folks is that the hard costs (GPU
| compute) and data moats of the large players typically make it
| virtually impossible for an upstart to make a meaningful
| difference that isn't immediately copied wholesale by a major
| player.
| 
| Software velocity is increasing. Investors should be considering
| what that means for their investments.
| 
| I would be worried if I were tied up in a company that depends
| on bloated professional services. LLM-enabled senior engineers
| are 100x more efficient and safer than brand-new junior devs.
| Organizations that embrace the best people using the best tech
| ought to make Oracle, with its famous billion-dollar cost
| overruns, quake in its boots.
 
| vasili111 wrote:
| While I think that some AI startups and new AI products will be
| successful, I also think that the main beneficiaries of the AI
| revolution will be companies that integrate the new AI
| technologies into their existing products.
 
| gpvos wrote:
| https://archive.ph/qwSAH
 
| codegeek wrote:
| It seems investors are cautious and not getting on the hype
| train blindly (cough.. crypto/blockchain cough..). I think that
| is a good thing. AI has real use cases, but it is currently
| going through the hype cycle, especially with every Tom, Dick,
| and Harry starting an "AI Startup" that is mostly a wrapper
| around ChatGPT etc. I think in the next 5-7 years AI will
| stabilize and most of the "get rich quick" types will have
| disappeared. Whatever is left then will be AI and its future.
 
  | mach1ne wrote:
  | Depends on what you mean by 'wrapper'. For most AI startups it
  | isn't viable to train their own models, and for most customer
  | use-cases the ChatGPT interface isn't enough. Wrappers are
  | currently the only practical way to put AI into production.
 
    | ska wrote:
    | This is approximately true at the moment - but it's an open
    | question how much that is worth to customers. The market will
    | sort it out, but it's not clear that all of these "wrapper"
    | startups have a workable business model.
 
      | mach1ne wrote:
      | True, especially regarding how easily their services can be
      | replicated. Their margins are low, and customer acquisition
      | does not provide them with network effects that would yield
      | a moat.
 
  | soulofmischief wrote:
  | ha ha, another "cryptocurrency has no real use cases but AI
  | does" post on HN, my favorite meme.
 
    | wiseowise wrote:
    | It's true, though.
 
      | soulofmischief wrote:
      | The extreme irony is that the automated web will largely be
      | used by AI to begin with, and the automated web is powered
      | by decentralized computational efforts such as smart
      | contracts and digital currency. It's like people completely
      | forget/ignore that cryptocurrency is a mathematical
      | problem still in its infancy.
      | 
      | If you think these aren't all fundamental units of the next
      | web, you're not thinking about it from the right
      | perspective. If you can't pick apart the real mathematical
      | utility behind crypto efforts from a generation of scammers
      | who hijacked a very real thing, then you just lack
      | understanding.
      | 
      | We are decades away from the most obvious solution but it
      | very likely involves cryptographically-backed digital
      | currency and smart contract systems used by automated
      | neural networks.
 
  | strangattractor wrote:
  | That is new in itself. When have VCs ever not jumped on the
  | hype wagon? The lemming squad has FOMO for blood.
 
  | dehrmann wrote:
  | We might be on a hype train, but ChatGPT is already much more
  | useful than bitcoin ever was.
 
    | codegeek wrote:
    | I agree with you there.
 
    | LordDragonfang wrote:
    | I think that's precisely why the investors aren't as
    | interested - bitcoin had very little value by itself, so
    | investors got dollar signs in their eyes when a startup
    | claimed to be able to _add_ the value it was missing.
    | 
    | ChatGPT already has a _lot_ of value by itself, the value
    | _added_ by any startup is going to be marginal at best.
 
      | boredumb wrote:
      | This is absolutely correct. As soon as I was able to get
      | access I built my own... GPT proxy to generate marketing
      | copy and all that for people and while it was neat it comes
      | down to a regular crud application that has a wrapper
      | around an OpenAI API, the moat isn't there, the app was
      | alright but I realized pretty quickly my "value" being that
      | I'm basically using a template engine against a text prompt
      | - I probably shouldn't shut down my consulting business
      | over pursuing it.
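
The "template engine against a text prompt" pattern described
above can be sketched in a few lines. This is a hypothetical
illustration: the names `build_prompt` and `generate_copy` and
the template text are invented, and the actual LLM call is left
as a pluggable callable rather than a real OpenAI request.

```python
from string import Template

# The entire "product": one prompt template plus one API call.
COPY_PROMPT = Template(
    "Write a short, upbeat marketing blurb for $product, "
    "aimed at $audience. Tone: $tone."
)

def build_prompt(product: str, audience: str, tone: str = "friendly") -> str:
    """Fill the template - this is the whole 'secret sauce'."""
    return COPY_PROMPT.substitute(product=product, audience=audience, tone=tone)

def generate_copy(product: str, audience: str, llm=None) -> str:
    """Send the filled template to an LLM. `llm` is any callable
    taking a prompt string; in production it would wrap a hosted
    chat-completion API."""
    if llm is None:
        raise ValueError("no LLM backend configured")
    return llm(build_prompt(product, audience))
```

Everything defensible here lives in a few lines of English inside
the template, which is the commenter's point.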
 
      | tsunamifury wrote:
      | I think this is a good example of the VC mindset, but I
      | think it is also flawed on their part.
      | 
      | LLMs are a lot more like a generalized processor than
      | people are admitting right now. Granted, you can talk to
      | one, but it becomes significantly more capable when you
      | learn how to program it -- and that's where the value will
      | be added.
 
        | LordDragonfang wrote:
        | >when you learn how to program it
        | 
        | I don't know if you mean, like, LoRAs and similar (actual
        | substantive changes), but the vast majority of "learning
        | how to program" LLMs (accounting for the majority of
        | startup pitches as well) is "prompt engineering" - which,
        | as the meme goes, isn't a moat. There's a skill to it,
        | yes, but if your singular advantage boils down to a few
        | lines of English prose, your product isn't able to
        | control a market - and VCs are (rightly) not interested
        | unless you have the possibility to be a near-monopoly.
 
        | svnt wrote:
        | > I don't know if you mean, like, [...] and similar
        | (actual substantive changes), but the vast majority of
        | "learning how to program" [...] (accounting for the
        | majority of startup pitches as well) is "[...]
        | engineering" - which, as the meme goes, isn't a moat.
        | There's a skill to it, yes, but if your singular
        | advantage boils down to a few lines of [...], your
        | product isn't able to control a market - and VCs are
        | (rightly) not interested unless you have the possibility
        | to be a near-monopoly.
 
        | tsunamifury wrote:
        | This is the error in this thinking. It would be like
        | saying software doesn't have a moat because it's just
        | clever talking to a processor.
        | 
        | But no one would say that now; that's ridiculous. There
        | is a degree of prompt engineering that is already
        | defensible - I'm already doing it myself, IMO. You'll
        | see very sophisticated hybrid programming/prompting
        | systems being developed in the next year that will prove
        | out the case.
        | 
        | For example: 30 parallel prompts that amalgamate into a
        | decision and an audit, with 10 simulation-level prompts
        | running chained afterwards to clean the output. These
        | kinds of atomic configurations will become sufficiently
        | complex that they are not for just 'anybody'.
 
        | __loam wrote:
        | The output of that many chained prompts is probably so
        | unreliable that it's useless.
 
        | tsunamifury wrote:
        | Again, this is a misunderstanding of what LLMs are
        | capable of. These aren't chained, you can run parallel
        | prompts of 15 personas with X diversity of perspective,
        | that reason on a singular request, string, input or
        | variable, they provide output plus audit explanation. You
        | then run an amalgamation or committee decision (sort of
        | like mixture of experts) on it to output variable or
        | string. Then run parallel simulation or reflection
        | prompts based on X different context personas to double
        | check their application to outside cases, unconsidered
        | context, etc.
        | 
        | It's pretty effective on complex problems like Spam,
        | Trust and Safety, etc. And the applications of these sort
        | of reasoning atomic configurations I think are unlimited.
        | It's not just 'talking fancy' to an AI, its building
        | processes that systematically improve reasoning to
        | different very hard applied problems.
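
As a rough illustration only, the parallel-persona committee flow
described in this comment might look something like the sketch
below. Here `llm` stands in for any prompt-to-text callable; the
function name, prompt wording, and persona handling are invented
for the example, not taken from the commenter's actual system.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

def committee_answer(
    question: str,
    personas: List[str],
    llm: Callable[[str], str],
) -> str:
    """Fan one question out to several persona prompts in parallel,
    amalgamate the answers with a 'committee chair' prompt, then
    run one reflection prompt over the result."""
    prompts = [
        f"You are {p}. Answer the question, then briefly audit "
        f"your own reasoning.\nQuestion: {question}"
        for p in personas
    ]
    # Parallel committee members: each call is independent.
    with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
        opinions = list(pool.map(llm, prompts))
    # Amalgamation: one prompt merges all opinions into a decision.
    decision = llm(
        "Act as committee chair. Merge these answers into one "
        "final decision with a short audit trail:\n"
        + "\n---\n".join(opinions)
    )
    # Reflection: double-check the decision against missed context.
    return llm(
        "Review this decision for unconsidered cases; return the "
        "corrected decision only:\n" + decision
    )
```

With N personas this costs N + 2 model calls per question, which
is the latency/cost trade-off the replies below poke at.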
 
        | ironborn123 wrote:
        | Brave are those who have set out to make prompt engg an
        | entire industry, with gpt-5 and gemini lurking on the
        | horizon.
 
        | tsunamifury wrote:
        | Sam has specifically stated it's unlikely there will be
        | a GPT-5, and GPT-4 is likely just a deeply prompt-
        | optimized and multimodal version of GPT-3.
        | 
        | But overall, hasn't that theme been true for like... all
        | tech ever? You have to set up and build your own
        | innovation path at some point.
 
        | matmulbro wrote:
        | [flagged]
 
        | dlkf wrote:
        | > the applications of these sort of reasoning atomic
        | configurations I think are unlimited
        | 
        | They are limited to applications in which the latency
        | SLO is O(seconds), knowledge of 2021-to-present doesn't
        | matter, and you're allowed to make things up when you
        | don't know the answer.
        | 
        | There are, to be fair, many such applications. But it's
        | not unlimited.
 
    | [deleted]
 
  | EA-3167 wrote:
  | I think people are starting to realize that "AI" in the
  | present context is just the new vehicle for the people who
  | were yelling "NFTs and crypto" just a year ago.
 
    | xwdv wrote:
    | I can't wait for these people to run out of "vehicles" and
    | face reality.
 
  | bushbaba wrote:
  | I think it's more to do with the high-rate environment, with
  | most AI firms having no clear path to profitability. Whereas
  | many traditional tech venture rounds nowadays have a solid
  | business model and current profitability per deal, using
  | raised capital to accelerate growth at a current loss for
  | long-term profit.
 
  | ben_w wrote:
  | Even with my rose-tinted glasses on about the future of AI,
  | it's not clear who will be the "winner" here, or even if any
  | business making them will be a winner.
  | 
  | If open source models are good enough (within the category of
  | image generators it looks like many Stable Diffusion clone
  | models are), what's the business case for Stability AI or
  | Midjourney Inc.?
  | 
  | Same for OpenAI and LLMs -- even though for now they have the
  | hardware edge and a useful RLHF training set from all the
  | ChatGPT users giving thumbs up/down responses, that's not
  | necessarily enough to make an investor happy.
 
    | rebeccaskinner wrote:
    | Early signals suggest to me that regulatory capture will end
    | up being the moat here. I think it's a horrible outcome for
    | society, but likely one that will make some companies a lot
    | of money. Early grumblings around regulating AI models seem
    | at risk of making open-source models (and even open-weight
    | models) effectively illegal. Training from scratch will also
    | remain prohibitively expensive for individuals and most
    | bootstrapped startups, and with more of the common data
    | sources locking companies out of using them for training,
    | it's going to be hard for new entrants to catch up.
    | 
    | I personally think the only way AI will end up being a
    | benefit to society is if we end up with unencumbered free and
    | open models that run locally and can be refined locally.
    | Every financial incentive is pushing in the other direction
    | though.
 
      | benreesman wrote:
      | This should be one of the highest voted comments in all of
      | the AI threads this year.
      | 
      | Meta is no doubt doing this because it's in their best
      | interest, but if both the quality and licensing of LLaMA 2
      | start a trend that's a pretty effective counter-weight to
      | eyeball scanner world.
      | 
      | And there's other stuff. George Hotz is pretty unpopular
      | because he does kind of put the crazy in crazy smart (which
      | I personally find a refreshing change to the safe space for
      | relatively neurotypical people in the land of aspy nerds),
      | but tinygrad is a fundamentally more optimizable design
      | than its predecessors with an explicit technical emphasis
      | on accelerator portability and an implicit idealistic
      | agenda around ruining the whole day of The AI Cartel. And
      | it runs the marquee models. Serious megacorp CEOs seem to
      | be glancing nervously in his direction, which is healthy.
      | 
      | It's not locked-in yet.
 
    | serjester wrote:
    | Agreed. Look at Jasper: a year ago the 600-lb gorilla in the
    | room, and overnight their moat has dried up along with many
    | of their customers.
 
    | mahathu wrote:
    | I know nothing about AI or stocks, so please correct me if
    | I'm wrong here: isn't NVIDIA a clear winner already (barring
    | any major technological advance that lets all of us run LLMs
    | on our phones)? I just checked the stock on Google and it's
    | up 200% since the beginning of the year!
 
    | emadm wrote:
    | Making models usable is really valuable. For Stability AI I
    | discussed business models with Sam Lessin here:
    | https://www.youtube.com/watch?v=mOOYJONenWU but basically the
    | edge is data and distribution, given how widely used this
    | technology will be.
    | 
    | Open is its own area; proprietary general models are a race
    | to zero vs OpenAI and Google, who are non-economic actors.
    | 
    | Most AI right now is just features though, very basic,
    | without the real thinking needed.
    | 
    | Next year we go enterprise.
 
  | alex1212 wrote:
  | Definitely a hype cycle at the moment. I am old enough that
  | this is my second ;)
 
    | hattmall wrote:
    | Second hype cycle or second AI hype cycle? If the latter when
    | was the first?
 
      | ska wrote:
      | Wikipedia has a summary of some of the earlier history here
      | https://en.wikipedia.org/wiki/AI_winter
 
      | dspillett wrote:
      | Not the OP, but I've been through a few AI hype cycles and
      | know of earlier ones, depending on what you count: ML more
      | generally over the last half decade or so [1], the
      | excitement around Watson (and Deep Blue before it), the
      | second big bump in neural-network interest in the mid/late
      | '80s [2], a couple of cycles around expert-system-like
      | methods over the decades, etc.
      | 
      | --
      | 
      | [1] Though that has produced more useful output than some
      | of the previous hype cycles, as I think the current one
      | will too, as it seemingly already is doing.
      | 
      | [2] I was barely born for the start of the "AI winter"
      | following the first such hype cycle.
 
      | yxre wrote:
      | 1955, with the advent of the field and some very hopeful
      | mathematicians, but the research never produced anything.
      | 
      | 1980, after the foundations of neural networks, but they
      | were too computationally intensive to be useful.
      | 
      | 2009, with Watson.
      | 
      | https://www.hiig.de/en/a-brief-history-of-ai-ai-in-the-
      | hype-...
 
        | padolsey wrote:
        | Could this time be different? The tools are now in the
        | hands of the "masses", not behind closed doors or in
        | lofty ivory towers. People can run this stuff on their
        | laptops etc
 
        | sgift wrote:
        | Could it? Yes. Will it? I can tell you when you no
        | longer need the answer, i.e. in a few years.
        | 
        | It's the very nature of a hype cycle that it is very
        | hard to distinguish from the real thing.
 
        | timy2shoes wrote:
        | Every time someone says "this time it's different" (e.g.
        | 1998 internet bubble, 2007 housing bubble, 2020 crypto
        | bubble, etc) time proves that this time was not really
        | that different.
 
        | jsight wrote:
        | But the internet did change things. Even crypto is
        | debatable: BTC still exists and still has a pretty high
        | value, just not as high as at its peak.
        | 
        | TBH, I'm not sure how to quantify housing bubbles
        | either. I'd bet most of the country has much higher home
        | prices now than in 2007, and that they were higher than
        | 2007 in most places in most years between then and now
        | too.
 
        | civilitty wrote:
        | _> (e.g. 1998 internet bubble, 2007 housing bubble, 2020
        | crypto bubble, etc)_
        | 
        | That's some extreme cherry picking.
        | 
        | During that time period, the internet and smartphones
        | alone have completely changed society (for better and
        | worse) in the span of only three decades, despite the
        | former causing a minor economic crash in its infancy.
        | 
        | Almost everything _is_ different except human nature. The
        | scammers are innovating just like everyone else.
 
        | goatlover wrote:
        | When someone says a technology completely changed
        | society, I think of the hypothetical singularity that
        | Kurzweil and company predict, where it's basically
        | impossible for us to predict what the future looks like
        | after. But when you look back at the world before the
        | rise of the web and then smartphones, it's just taking
        | preexisting technologies and making them available in
        | more mobile formats. TV, radio, satellite and computers
        | existed before then (1968 mother of all demos had word
        | processing, hypertext, networking, online video). And
        | some people did more or less foresee what we've done
        | online since.
        | 
        | We still burn fossil fuels to a large extent, still drive
        | but not fly cars, still live on Earth not in space, still
        | die of the same causes, etc.
        | 
        | I watch a long cargo train that looks like it's from the
        | '80s go by and wonder how much the internet changed cargo
        | hauling. I'm sure with the logistics the internet made
        | things a lot more efficient, but the actual hauling is
        | not much different. It's not like we teleport things
        | around now. You can order online instead of out of a
        | catalog, but brick stores remain. You can read digital
        | books, but still plenty of printed materials, bookstores,
        | libraries.
 
        | JohnFen wrote:
        | > the internet and smartphones alone have completely
        | changed society
        | 
        | I honestly think this overstates the case pretty
        | severely. They have certainly caused societal change, but
        | from what I can see, society as a whole is not actually
        | all that different from what it was before all of that.
 
        | vikramkr wrote:
        | Well, of those 3 the 1998 internet bubble actually was
        | different and modern society actually was fundamentally
        | changed by the technology in question, so idk if that's
        | the best counterargument. The other two, sure yeah those
        | amounted to essentially nothing. But there have been
        | plenty of bubbles where the concept underlying the bubble
        | actually did have large societal impacts even if the
        | investors all lost money, like tulipmania with futures
        | contracts and railway mania with trains
 
        | [deleted]
 
        | p1esk wrote:
        | I don't remember much happening with NNs in 1980. There
        | was a lot of hype in 1992-1998 though.
 
        | rsynnott wrote:
        | There was also a voice recognition thing in the 90s, the
        | whole self-driving car/computer vision thing early to mid
        | last decade, and a _very_ short-lived period when
        | everyone was a chatbot startup in 2016 (I think Microsoft
        | Tay just poured so much cold water over this that it died
        | almost immediately).
 
        | alex1212 wrote:
        | Spot on - 2009, with Watson, was my first. Oh, the
        | memories... It was nowhere near as nuts as this one, at
        | least in my head.
 
      | paulddraper wrote:
      | Machine learning became quite the buzzword in 2016(?)
 
        | alex1212 wrote:
        | I definitely remember it being the "hot thing" in the ad
        | tech space
 
      | ben_w wrote:
      | I can remember hyped up news after Watson; personally I was
      | hyped up after Creatures (though in my defence I was a
      | teenager and hadn't really encountered non-fictional AI
      | before); before those there was famously the AI winter,
      | following hype that turned out to be unreasonable.
 
    | jvanderbot wrote:
    | Fourth here, if you count dot-com, early robotics cum self-
    | driving cars, and Web3. Each had its impact, winners, and
    | vast array of losers.
 
| dadoomer wrote:
| According to the article 38% see a correction in the near-ish
| future.
| 
| Also,
| 
| > Unlike during the dot-com bubble of the 2000s, AI isn't
| entirely based on speculation.
| 
| I'd say the dot-com bubble was backed by a revolutionary product:
| the Internet. That doesn't change that expectations were too
| high.
 
  | sebzim4500 wrote:
  | Were expectations too high?
  | 
  | Some of the companies involved are now worth trillions.
 
    | rsynnott wrote:
    | Amazon, okay, but who else? Nearly all of the big .com-era
    | startups (and major non-startup beneficiaries, like Sun) are
    | _gone_. Yahoo somehow still exists, I suppose.
    | 
      | I suppose you could argue Google, but it's an odd one; it
      | was right at the tail end, and was really only taking off
      | as everything else was collapsing.
 
      | hollerith wrote:
      | I would argue Google because the dot-com bubble did not
      | burst till 2000, and Google was founded in 1998.
 
    | robryan wrote:
    | Probably the same with AI, the short term hype cycle being
    | too early for the vast majority of companies.
 
| dudeinhawaii wrote:
| You really have to have your own existing moat for AI to
| augment (a la Adobe, Microsoft, etc.). Anything built directly
| on AI can be replicated rather quickly once someone figures out
| what combination of prompt + extra data was used.
| 
| That said, you don't have to be the mega players to have an
| existing small moat. If your product does something great
| already, you get to improve it and add value for users very
| quickly. That's been my experience anyway.
 
  | caesil wrote:
  | "once someone figures out what combination of prompt + extra
  | data was used"
  | 
  | This is assuming your thing is one call to GPT-n rather than a
  | complex app with many LLM-core functions, and it also assumes
  | that data is easy to get.
 
| spamizbad wrote:
| For people who are in AI companies or have heard their pitches:
| What's the typical response to "What makes your AI special that
| can't be replicated by a dozen competitors?"
 
  | paulddraper wrote:
  | Everything can be replicated with time and money.
  | 
  | All the usual things.
  | 
  | First mover
  | 
  | Features
  | 
  | Integrations
  | 
  | Platform synergies
 
  | dgb23 wrote:
  | A lot of it is UX and molding things to specific domains and
  | use cases.
  | 
  | Just like forms over SQL, there seems to be a never ending
  | demand.
 
    | claytonjy wrote:
    | Is Jasper a counterexample? Good UX, domain-specific, but
    | still a standalone ChatGPT wrapper forced to do layoffs
    | because they have no moat.
 
      | alex1212 wrote:
      | From memory, they raised a huge round just before ChatGPT
      | went viral. Not sure if they would have been able to do so
      | well if they were raising now. Very much doubt it.
 
  | yujian wrote:
  | I work on Milvus at Zilliz and we often encounter people
  | working on LLM companies or frameworks. I don't ask this
  | question a lot, but it looks like at the moment many companies
  | don't have a real moat; they are just building as fast as they
  | can and using talent/execution/funding as their moat.
  | 
  | I've also heard some companies that build the LLMs say that
  | those LLMs are their moat - the time, money, and research that
  | goes into them is high.
 
  | version_five wrote:
  | I've sold various "AI" consulting projects. I tell people that
  | all the hard AI tech is open source and that there's nothing
  | that differentiates it. What is different is implementation
  | experience and industry customization. For example, everyone
  | has datasets scraped from the internet, but deep application-
  | specific datasets are not publicly available. Likewise,
  | experience with the workflows in an industry.
  | 
  | It's just software, there's little "secret sauce" in the
  | engineering, it's the knowledge of the customer problem that's
  | the differentiator.
 
    | PaulHoule wrote:
    | I worked at a place where we thought there was value in
    | putting it all together in one neat package with a bow.
    | 
    | That is, a lot of people are thinking at the level of "let's
    | build a model", but for a business you will need to build a
    | model and then update it repeatedly as the world changes and
    | your requirements change.
    | 
    | There would be a lot to say for a solution that includes
    | tools for managing training sets, foundation models,
    | training and evaluation, packaging things up for inference
    | in a repeatable way, etc.
    | 
    | One trouble, though, is that you have to make about 20
    | decisions about how you do those things, and anyone
    | developing that kind of framework gets some of them wrong -
    | which will drive you crazy, because other people will make
    | different wrong decisions than you would. (To take an
    | example, look at the model-selection tools in scikit-learn
    | and huggingface. Both are pretty good for certain things,
    | but they don't work together and both have serious flaws...
    | And don't get me started on all the people who are hung up
    | on F1 when they really should be using AUC...)
    | 
    | So given the choice of (a) building out something half-baked
    | vs. (b) fighting with various deficiencies in a packaged
    | system, you can't blame people for picking (a) and "just
    | doing it". (Funnily enough, I always told people at that
    | startup that we'd get bought by one of our customers. I
    | thought it would be a Big Four accounting firm, a big
    | telecom, or an international aerospace firm, but... it
    | turned out to be a famous shoe and clothing brand.)
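An editor's aside on the F1-versus-AUC point in the comment above: F1 is computed at a single decision threshold, while ROC AUC scores the ranking across all thresholds, which is one reason the two metrics can disagree. A small self-contained sketch with made-up scores and labels (illustrative only, not from the thread):

```python
# Illustration: F1 changes with the classification threshold,
# while ROC AUC is a single threshold-free summary of ranking.

def f1_at_threshold(scores, labels, threshold):
    """F1 score after binarizing scores at the given threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def roc_auc(scores, labels):
    """ROC AUC as the probability a positive outranks a negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   1,   0]

print(roc_auc(scores, labels))               # 0.75, threshold-free
print(f1_at_threshold(scores, labels, 0.5))  # 0.75 at this cutoff
print(f1_at_threshold(scores, labels, 0.25)) # 0.6 at a lower cutoff
```

The same classifier scores yield different F1 values depending on the cutoff, while AUC stays fixed, which is the sense in which comparing models on F1 alone can mislead.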
 
  | alex1212 wrote:
  | We focus on "selling" the market size and customer
  | problem-solution fit, and not so much the AI part. AI is just
  | the means to an end, a better way to solve the problem that we
  | are solving. I saw some interesting stats the other day
  | showing that the majority of investments in AI focus on
  | infrastructure (databases etc.) and foundational models.
 
  | chriskanan wrote:
  | Having large amounts of curated data that is hard to procure,
  | e.g., medical imaging data.
  | 
  | If one can scrape the data from the web, I can't imagine having
  | much of a moat or selling point.
 
    | alex1212 wrote:
    | It's tricky, because by definition the more use-case
    | specific the data, the harder it is to obtain at scale,
    | with some exceptions.
 
  | claytonjy wrote:
  | 1. Research talent. There's not actually that many people in
  | the world that can adequately fine-tune a large cutting-edge
  | model, and far fewer that can explore less mainstream paths to
  | produce value from models. The only way to get good
  | researchers is to have name-brand leaders, like a top ML
  | professor.
  | 
  | 2. Data. You can't do anything custom without good training
  | data! How to get it varies widely across industries.
  | Partnerships with established non-tech companies are a common
  | path, and these tend to rely on the network and background of
  | the founders.
  | 
  | Even with both those things it's not easy to outcompete a
  | large, motivated company in the same space, like a FAANG. They
  | have the researchers, they have the data and partnerships, so
  | the way to beat them is to move quickly and hope their A- and
  | B-teams are working on something else.
 
    | bugglebeetle wrote:
    | > There's not actually that many people in the world that can
    | adequately fine-tune a large cutting-edge model
    | 
    | If you know how to run a Python script, you can fine-tune a
    | Llama model:
    | 
    | https://huggingface.co/blog/llama2#fine-tuning-with-peft
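For readers wondering why PEFT methods like LoRA (the technique behind the linked tutorial) make fine-tuning so cheap: instead of updating a full weight matrix W, you freeze W and train a small low-rank update B @ A. A hypothetical from-scratch sketch of that idea in pure Python (toy dimensions, not the Hugging Face API):

```python
# Illustrative sketch of the LoRA idea: a frozen d x d weight W
# gets a trainable rank-r update B @ A, so trainable parameters
# drop from d*d to 2*d*r per adapted matrix.

def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_effective_weight(W, A, B, alpha=1.0):
    """Effective weight W + alpha * (B @ A), as in LoRA."""
    BA = matmul(B, A)
    return [[W[i][j] + alpha * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

d, r = 8, 2                          # toy model dim 8, LoRA rank 2
W = [[0.0] * d for _ in range(d)]    # frozen pretrained weight (toy)
A = [[0.1] * d for _ in range(r)]    # trainable, r x d
B = [[0.1] * r for _ in range(d)]    # trainable, d x r

full_params = d * d                  # 64 in this toy
lora_params = d * r + r * d          # 32 in this toy
print(full_params, lora_params)

# At Llama-like dimensions the gap is dramatic:
d_llama, r = 4096, 8
print(d_llama * d_llama, 2 * d_llama * r)  # ~16.8M vs 65,536
```

The parameter-count gap is why a single consumer GPU and a tutorial script suffice for this kind of fine-tuning, which is the crux of the disagreement in this subthread.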
 
      | polygamous_bat wrote:
      | That's roughly akin to saying "if you have a wrench, you
      | can fix a car", and posting a link to a YouTube tutorial
      | with it.
 
        | bugglebeetle wrote:
        | No, not really. The script does the vast majority of the
        | work. The only challenges here would be knowing how to
        | use Google Colab and formatting/splitting your training
        | and test data. That's the computer science equivalent of
        | adding wiper fluid to your car.
 
        | claytonjy wrote:
        | It's true we've made this easy, and that's awesome, but
        | this is not what AI startups do; this is what other
        | companies experimenting with AI do.
 
        | bugglebeetle wrote:
        | OK, but that's goalpost shifting. We've gone from "there
        | are not many people who can fine-tune a model"
        | (demonstrably untrue) to "there are not many people who
        | can do {???} that AI startups do". It's unclear what
        | this special AI-startup thing you're referring to is,
        | but given that various fine-tuning strategies, like
        | QLoRA, emerged out of the open-source community, this
        | also seems unlikely to be true.
 
        | claytonjy wrote:
        | Yeah, that's fair. I could have been more precise about
        | what an advantage research talent can be.
        | 
        | As an example, the startup-employed AI researchers I
        | know had already PEFT'd Llama 2 within a day or two of
        | the weights being out, determined that it wasn't good
        | enough for their needs, and began a deeper fine-tuning
        | effort. That's not something I can do, nor can most
        | people, and it's a serious competitive advantage for
        | those who can. It's a rather different interpretation of
        | "can adequately fine-tune" than "can follow a tutorial".
        | 
        | When I think "AI startup", I think of the places where
        | these people work. I don't think there's many of those
        | people, and I think their presence is a big competitive
        | advantage for their employers.
 
        | bugglebeetle wrote:
        | Understood. Apologies, I wasn't trying to be combative. I
        | agree that what you describe requires a special emphasis
        | on AI stuff or at least a part of the org that has a
        | research focus. I work on an R&D team at a legacy org and
        | we do the latter.
 
    | alex1212 wrote:
    | Research/talent is how Mistral was able to justify its
    | valuation at three weeks old. Pre-product, pre-anything.
 
      | claytonjy wrote:
      | Yes, totally. Not something anyone reading this is going to
      | replicate!
 
        | Palmik wrote:
        | > Not something anyone reading this is going to
        | replicate!
        | 
        | They were L7/L6 ML researchers/eng at FAANG, I'd bet
        | there are quite a few people like that lurking here.
 
        | claytonjy wrote:
        | I think there's rather more to it than that. Two of these
        | guys are on the Llama paper; the hype and momentum from
        | that is surely responsible for a huge chunk of their
        | valuation. If you take the big LLM-relevant papers, most
        | of the folks with this kind of profile are already off
        | doing some kind of startup.
        | 
        | The Mistral folks have impeccable timing, but are leaving
        | FAANG somewhat late compared to their peers.
 
        | Palmik wrote:
        | Definitely. Just to be clear, my comment wasn't to
        | diminish the Mistral folks, they are certainly a very
        | impressive group, but rather to contest your implication
        | about the audience here.
 
        | alex1212 wrote:
        | Interestingly enough, I think there is a lack of talent
        | on the investment side of things too. Very few investors
        | have the right skill sets on their teams to do the deep
        | technical due diligence required for true AI solutions.
 
  | rvz wrote:
  | > "What makes your AI special that can't be replicated by a
  | dozen competitors?"
  | 
  | As you can see with all the responses here, they have failed to
  | realize that this is a trick question.
  | 
  | The real answer is that _none_ are special and can be
  | replicated by tons of competitors.
 
  | PaulHoule wrote:
  | (1) In the current environment things are moving so fast that
  | the model of "get VC funding", "hire up a team", "talk to
  | customers", "find product market fit" is just not fast enough.
  | 
  | Contrast that with how quickly Adobe rolled out Generative
  | Fill, a product that will keep people subscribed to Photoshop.
  | (E.g., it changed my photography practice in that now I can
  | quickly remove power lines, draw an extra row of bricks, etc.
  | I don't do "AI art", but I now have a buddy that helps retouch
  | photos while keeping it real.)
  | 
  | If they went and screwed around with some startup they'd add
  | six months to a project like that unless it was absolutely in
  | the place where they needed to be.
  | 
  | (2) If you were like Pinecone, working on this stuff before it
  | was cool, you might be a somebody. But if you just got into
  | A.I. because it was hot, or if you pivoted from "blockchain",
  | or if you've ever said both of those things in one sentence, I
  | am sorry, but you are a nobody: somebody behind the curve, not
  | ahead of it.
  | 
  | (3) I've worked for startups and done business development in
  | this area for years before it was cool, and I can say it is
  | tough.
 
    | lolinder wrote:
    | #1 is a really interesting point. Traditionally startups have
    | a velocity advantage over the big companies because they
    | don't have all the red tape, but in AI the big companies seem
    | to have the advantage. The amount of data you need for
    | training and the compute resources required means that
    | startups are stuck with APIs that someone else provides, but
    | a giant company like Adobe can train their own very quickly
    | just based on the research papers that are out there and
    | their own data.
 
      | PaulHoule wrote:
      | Some of it is that a VC-based company that just got funding
      | today is not going to have any product _at all_ for at
      | least six months or a year if not longer.
      | 
      | Somebody who needs a system built for their business right
      | now gains very little talking to them.
      | 
      | If a startup is a year or two post funding it might really
      | have something to offer, but the huge crop of A.I. startups
      | funded in the last six months have missed the bus.
      | 
      | Big co's can frequently move very fast when there is a lot
      | on the line.
 
  | paxys wrote:
  | This isn't unique to AI. If you are hesitant to invest in
  | startups because their products could be duplicated by
  | competitors or big tech, then you should not be a VC at all.
 
___________________________________________________________________
(page generated 2023-07-31 23:01 UTC)