Tuesday, 4 April 2023

Lord Help Us After They Hooked ChatGPT Up to a Furby

quote [ Programmer Jessica Card has hooked up OpenAI's chatbot ChatGPT to a stripped-down Furby. And the results are just as hair-raising as you'd expect.

So there we have it. Is this truly the end of days as we know it? The little critters are clearly capable of some pretty horrible stuff. Remember when they were banned by the NSA back in the 90s? ]

Asking as someone unfamiliar with the subject/field:

What makes GPT-4 better than a glorified early-2000s chatbot that's been prettied up and given access to multithreading and computational power?

[SFW] [Virtual & Augmented Reality] [+3]
[by R1Xhard]
<-- Entry / Current Comment
steele said @ 7:31pm GMT on 5th April [Score:1 Informative]
What makes GPT-4 better than a glorified early-2000s chatbot that's been prettied up and given access to multithreading and computational power?

ELIZA and its kin were formulaic bots. Stuff like: if (noun), then ask "How does (noun) make you feel?"
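That "if (noun), then ask about (noun)" style can be sketched in a few lines. This is a minimal illustration, not ELIZA's actual rule set; the patterns and responses here are made up for the example:

```python
import re

# A minimal ELIZA-style sketch: each rule is a regex pattern plus a
# response template; the first rule that matches the input fills its
# template with the captured text. No statistics, no learning --
# just canned pattern-matching.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (\w+)", re.IGNORECASE), "How does your {0} make you feel?"),
]

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # fallback when no rule matches

print(respond("My furby is staring at me"))  # How does your furby make you feel?
```

Everything such a bot can say is hand-written into its rules, which is exactly what makes it "formulaic."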

Large Language Models are more like statistical engines. You feed a shitload of data into them and training produces something like one giant function that captures the statistical patterns across all of that data. Then when you give the model an input, it outputs a statistically likely continuation given everything it was trained on. To put it simply, they're like very large, very contextual autocomplete, but they're still very impressive: at their most basic they're still similar in function to human thought. Which doesn't mean they're thinking, but they're certainly going to introduce some interesting debates in the future as we do things like give them internal dialogues... or bodies.
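The "statistical autocomplete" idea can be shown with a toy model. Real LLMs use neural networks over enormous corpora and long contexts, but this bigram sketch (with a tiny made-up corpus) captures the core task: predict a likely continuation from counted data rather than from hand-written rules:

```python
from collections import Counter, defaultdict

# Toy "statistical autocomplete": count which word follows which in a
# small corpus, then continue a prompt by repeatedly picking the most
# frequent next word. A crude stand-in for next-token prediction.
corpus = "the furby talks and the furby blinks and the furby talks".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1  # tally observed word pairs

def autocomplete(word, steps=3):
    out = [word]
    for _ in range(steps):
        choices = following[out[-1]]
        if not choices:
            break  # no observed continuation for this word
        out.append(choices.most_common(1)[0][0])
    return out

print(autocomplete("the"))  # ['the', 'furby', 'talks', 'and']
```

The model never "knows" anything about Furbys; it only reproduces the statistically dominant continuation it saw, which is the point of the autocomplete analogy.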




