|
| minimaxir wrote:
| > Credit where due: friend of WCT Dennis wrote the script to feed
| Water Cooler Trivia questions to GPT-3 and access the OpenAI API.
|
| It should be noted that it's against OpenAI's rules to share
| access to GPT-3, although they've been inconsistent about
| enforcing it.
| cowllin wrote:
| We gave him a spreadsheet of questions and he gave us a
| spreadsheet of GPT-3 responses, so probably not actually shared
| access :)
|
| But point well-taken!
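|
| The spreadsheet-in, spreadsheet-out workflow described above
| amounts to a short batch script. A minimal sketch, assuming the
| 2021-era openai Python client, a hypothetical questions.csv with
| a "question" column, and the base "davinci" engine; the actual
| script is not public:
|
|     import csv
|     import openai
|
|     openai.api_key = "sk-..."  # your own API key
|
|     # Read questions from one spreadsheet and write GPT-3's
|     # answers to another, mirroring the workflow described above.
|     with open("questions.csv", newline="") as f_in, \
|             open("responses.csv", "w", newline="") as f_out:
|         reader = csv.DictReader(f_in)
|         writer = csv.writer(f_out)
|         writer.writerow(["question", "gpt3_response"])
|         for row in reader:
|             completion = openai.Completion.create(
|                 engine="davinci",    # base GPT-3 model in 2021
|                 prompt=f"Q: {row['question']}\nA:",
|                 max_tokens=32,
|                 temperature=0.0,     # favor the single best guess
|                 stop=["\n"],         # cut off at end of the answer
|             )
|             answer = completion.choices[0].text.strip()
|             writer.writerow([row["question"], answer])
|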
| sinned wrote:
| And yah, I hope I didn't violate their terms by doing this;
| it seemed like a fun thing to do!
| agravier wrote:
| > 2. Clues confuse GPT-3.
|
| They should probably have been removed. This gives me the overall
| impression that the testers treat GPT-3 a bit too much like an
| artificial human and not enough like an algorithm (which will
| work better with sanitized input). This is not a major criticism;
| the experiment is still interesting.
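|
| Sanitizing could be as simple as stripping the clue from each
| question before prompting. A hypothetical sketch in Python; the
| clue format (a trailing parenthetical like "(two words)") is an
| assumption, not taken from the actual question set:
|
|     import re
|
|     def strip_clue(question: str) -> str:
|         # Drop a trailing parenthetical hint such as "(two
|         # words)". The clue format here is assumed.
|         return re.sub(r"\s*\([^)]*\)\s*$", "", question).strip()
|
|     print(strip_clue("Who wrote 'War and Peace'? (two words)"))
|     # -> "Who wrote 'War and Peace'?"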
|
| Could it be that the marketing from OpenAI is to blame? From the
| OpenAI front page:
|
| > Discovering and enacting the path to safe artificial general
| intelligence.
|
| > Our first-of-its-kind API can be applied to any language task,
| and currently serves millions of production requests each day.
|
| Does that seem misleading?
| cowllin wrote:
| I thought about removing those two-word clues, but ultimately
| they exist for users, so I wanted the comparison to be closer
| to apples-to-apples!
| minimaxir wrote:
| _OpenAI's_ marketing has been fair, and it's not misleading
| (GPT-3 can be _applied_ to any language task, but your mileage
| will vary, as this submission demonstrates).
|
| However, OpenAI's endorsement of the demos amplifying and
| anthropomorphizing GPT-3 as a sentient mind doesn't help, and
| it's been disappointing that OpenAI doesn't really push back on
| that. (The hype is what prompted my rebuttal on GPT-3
| expectations, which still holds up:
| https://news.ycombinator.com/item?id=23891226)
|
| I do believe the future of AI text generation is more bespoke,
| algorithmic-input-friendly models, which is what I've been
| working on as my side project.
| wyldfire wrote:
| "Two'fer Goofer", "Tough Training" - why would GPT-3 give the
| question back as its response in these cases?
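|
| One plausible cause (an assumption, not confirmed in the thread):
| a raw completion model given a bare prompt with no formatting
| cues or stop sequence often continues the prompt's pattern
| instead of answering it. A hedged sketch of the usual mitigation
| with the 2021-era openai client, a Q/A-framed prompt plus a stop
| sequence:
|
|     import openai
|
|     # Framing the prompt as Q/A and cutting the completion at the
|     # newline usually constrains GPT-3 to an answer rather than
|     # letting it echo or extend the question.
|     completion = openai.Completion.create(
|         engine="davinci",
|         prompt="Q: What is the capital of France?\nA:",
|         max_tokens=16,
|         stop=["\n"],  # keep it from generating a new "Q:" line
|     )
|     print(completion.choices[0].text.strip())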