Tuesday, 4 April 2023

Lord Help Us After They Hooked ChatGPT Up to a Furby

quote [ Programmer Jessica Card has hooked up OpenAI's chatbot ChatGPT to a stripped-down Furby. And the results are just as hair-raising as you'd expect.

So there we have it. Is this truly the end of days as we know it? The little critters are clearly capable of some pretty horrible stuff. Remember when they were banned by the NSA back in the 90s? ]

Asking as someone ignorant of the subject/field:

What makes GPT-4 better than a glorified early-2000s chatbot that's been prettied up and given access to multithreading and computing power?

[SFW] [Virtual & Augmented Reality] [+3]
[by R1Xhard@7:59amGMT]

Comments

mechanical contrivance said @ 2:35pm GMT on 4th Apr [Score:2]
Now make it Nixon's head in a jar.
gendo666 said @ 8:28pm GMT on 4th Apr [Score:1 Underrated]
Well now we have looked into the face of the enemy.
steele said[1] @ 7:31pm GMT on 5th Apr [Score:1 Informative]
What makes GPT-4 better than a glorified early-2000s chatbot that's been prettied up and given access to multithreading and computing power?

ELIZA and the like were formulaic bots. Stuff like: if (noun), then ask "How does (noun) make you feel?"
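A toy sketch of that kind of pattern-matching rule, just as an illustration (not ELIZA's actual script, and the rules here are made up):

import re

# Toy ELIZA-style rules: match a pattern in the input and reflect part of it
# back inside a canned question. Purely illustrative.
RULES = [
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "How does your {0} make you feel?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
]

def respond(user_input):
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1))
    return "Tell me more."

print(respond("My job is stressing me out"))  # How does your job make you feel?
print(respond("I am tired all the time"))     # Why do you say you are tired all the time?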

Large Language Models are more like statistical engines. You feed a shitload of data into them and it creates something called a graph, which is basically a function that's the average of all that data. Then when you give the model an input, it outputs a statistically likely response for what all that data would produce. To put it simply, they're like very large, very contextual autocomplete, but they're still very impressive, because even at their most basic they're quite similar in function to human thought. Which doesn't mean they're thinking, but they're certainly going to introduce some interesting debates in the future as we do things like give them internal dialogues... or bodies.
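A toy version of that "contextual autocomplete" idea, with made-up probabilities standing in for what a real model learns from its training data:

import random

# Toy contextual autocomplete: probability of the next word given the
# previous two words. The numbers are invented, not from any real model.
NEXT_WORD = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "exploded": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def complete(context, steps=3):
    words = list(context)
    for _ in range(steps):
        dist = NEXT_WORD.get(tuple(words[-2:]))
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return words

print(" ".join(complete(["the", "cat"])))  # e.g. "the cat sat on the"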

R1Xhard said @ 11:15pm GMT on 5th Apr
I picture it like a bell curve; answers are the weighted averages combined.

Having a feature where it presents three answers would be an interesting, novel take (a rough sketch of the idea follows this comment).

Like the weighted-average answer, plus the answers from the Dunce end and the Acute end, presenting its different responses.

Guess that would be more a tester's domain than the public-facing oracle interface.
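One way to sketch that three-answer idea: sample the same next-token distribution at a few different temperatures, so you get the "weighted average" pick plus picks from further out in the tails. Hypothetical code with invented logits, not how any product actually exposes this:

import math
import random

# Toy next-token scores; higher means more likely. Invented numbers.
LOGITS = {"yes": 2.0, "probably": 1.0, "no": 0.2, "purple": -1.5}

def sample(logits, temperature, rng):
    # Softmax with a temperature: low temperature hugs the most likely token,
    # high temperature spreads probability toward the tails.
    scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(scaled.values())
    toks, weights = zip(*((t, v / total) for t, v in scaled.items()))
    return rng.choices(toks, weights=weights)[0]

rng = random.Random(0)
for temp in (0.2, 1.0, 2.0):  # conservative, middle, and far-out answers
    print(temp, sample(LOGITS, temp, rng))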

steele said @ 12:30am GMT on 6th Apr
"Net" really is an apt term for it. Each of those lines is a weighted variable that shifts during training, so the "average" is kind of spread across the whole net, using a technique called Gradient Descent.
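A bare-bones sketch of gradient descent on a single weight, just to show the "shift the variable a little each step" idea; toy data, nothing like the scale of a real net:

# Gradient descent on one weight: nudge w downhill on the loss until it
# settles near the value the data "wants". Toy example only.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # (x, y) pairs, roughly y = 2x

w = 0.0              # the weight we're training
learning_rate = 0.05

for step in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # shift the weight against the gradient

print(round(w, 3))  # settles close to 2.0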

They do tend to include a "seed" variable, a random number that can be used to pull the exact same answer back out or to get alternate answers. So when you hit 'regenerate response' on ChatGPT, what it's doing is resubmitting the same question but with a different seed.
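A toy illustration of that seed idea, with made-up answers (the prompt is ignored here; the point is just that a fixed seed reproduces the same pick and a new seed gives an alternate one):

import random

ANSWERS = ["It's a neural net.", "It's autocomplete on steroids.", "It's magic."]

def generate(prompt, seed):
    rng = random.Random(seed)  # the seed fixes all the "random" choices
    return rng.choice(ANSWERS)

print(generate("What is GPT-4?", seed=42))  # same seed -> same answer every run
print(generate("What is GPT-4?", seed=42))
print(generate("What is GPT-4?", seed=7))   # "regenerate response" ~ new seed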
R1Xhard said[1] @ 11:45am GMT on 6th Apr [Score:1 laz0r]
I always thunk of it more like this personally.

Those were 18-year-old me's thoughts.

steele said @ 7:35pm GMT on 5th Apr [Score:1 Underrated]
yogi said @ 9:32pm GMT on 5th Apr [Score:1 Sad]
I tried signing up for ChatGPT and couldn't! 😢

What did I do wrong? What website do I use? Are they prejudiced against Jews? 🤣
R1Xhard said[2] @ 11:07pm GMT on 5th Apr
https://help.openai.com/en/articles/6825453-chatgpt-release-notes

If you have a legacy Gmail, that seems to slip right in. Otherwise there's the trial; I registered months ago?

yogi said @ 8:29pm GMT on 17th Apr
Thanks. Through repeated assays, I finally succeeded!
