Pijul - Alternative to GitLab, Forgejo, Gitea, Gogs & Github
-
@LoudLemur said in Pijul - Alternative to GitLab, Gitea, Gogs & Github:
Also, what an amazing answer from ChatGPT!
I wonder if it's true or it made it up!
-
@jdaviescoates said in Pijul - Alternative to GitLab, Gitea, Gogs & Github:
@LoudLemur said in Pijul - Alternative to GitLab, Gitea, Gogs & Github:
Also, what an amazing answer from ChatGPT!
I wonder if it's true or it made it up!
You probably know the funny term they have for "making up" things, they say the AI "hallucinated" it!
-
@robi said in Pijul - Alternative to GitLab, Gitea, Gogs & Github:
@LoudLemur yes, which makes it quite useful in dealing with deliberate misinformation as it can uncover truth by similar means!
Could you please explain that a bit more, maybe with an example?
-
@LoudLemur consider the yin-yang symbol, representable in a similar way by true and false. They are intertwined.
Since the system makes connections across all kinds of data, true and false alike, and we assume (poorly) that our data is a source of truth, we get "hallucinations" when the connections made don't line up with expectations or reality (as we think we know it).
Hence, when the system is running over mostly false data or misinformation, it can also make connections that look like "hallucinations" but actually correspond to truth, even though they don't line up with the false narrative or reality we're being given (as we believe it to be).
HTH
-
-
@robi said in Pijul - Alternative to GitLab, Forgejo, Gitea, Gogs & Github:
it can also make connections that are "hallucinations", which actually correspond to truth,
Erm, as I'm sure you know really, most AI "hallucinations" are when they just make random shit up that sounds plausible but doesn't actually correspond to any truth at all.
-
@jdaviescoates No, sorry. It is never random.
It always connects to something (it's linear algebra); most of the time, though, people have no idea where or why, because of the black-box nature of the models.
That is why, in a previous life, I chose to work on AI algorithms that produce transparent, fully explainable models.
We could even explain black-box models!
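To make the "never random" point concrete, here's a minimal toy sketch (every weight, number, and name below is invented for illustration): a forward pass is just matrix multiplication followed by a softmax, so identical inputs through identical weights always yield the identical output distribution.

```python
import numpy as np

# Toy "model": a fixed weight matrix standing in for trained parameters.
# All shapes and values here are hypothetical, purely for illustration.
rng = np.random.default_rng(seed=0)         # fixed seed: reproducible "weights"
W = rng.normal(size=(4, 3))                 # pretend these were learned in training
context = np.array([0.2, -1.0, 0.5, 0.8])  # pretend encoding of the prompt

logits = context @ W                             # plain linear algebra
probs = np.exp(logits) / np.exp(logits).sum()    # softmax -> probability distribution

# Same weights + same input => same distribution, every single run.
print(probs)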
-
@robi said in Pijul - Alternative to GitLab, Forgejo, Gitea, Gogs & Github:
It is never random.
True, it's not random; it can only come from their models, from what they "know".
But it is very often completely and utterly wrong and factually incorrect.
-
@jdaviescoates yes, it's an associative machine, trained on lots of repeated data, which uses probabilities to pick the next best association.
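A minimal sketch of that picking step, with an entirely made-up vocabulary and made-up probabilities: the sampler chooses by likelihood, not by truth, so a confident wrong answer falls out of exactly the same mechanism as a correct one.

```python
import random

# Hypothetical next-token distribution for the prompt
# "The capital of France is ...". Tokens and weights are invented.
next_token_probs = {
    "Paris": 0.90,   # the likely, correct association
    "Lyon": 0.08,    # plausible-sounding but wrong
    "Nice": 0.02,    # unlikely, still possible
}

# Sampling picks by probability, not by truth: most runs print "Paris",
# but nothing in the mechanism stops it from printing "Lyon".
token = random.choices(
    population=list(next_token_probs),
    weights=list(next_token_probs.values()),
    k=1,
)[0]
print("The capital of France is", token)
```

Run it a few hundred times and roughly 10% of the outputs will be wrong while sounding just as confident, which is the point being made here.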
Just like your memory, it's faulty and fades unless you take deliberate steps to keep it reliable. Validate correctness.
Why they'd choose to do this in binary compute systems is beyond stupid. It's like asking your storage system to retrieve a specific file and having it pick a probabilistically similar one instead. You'd be furious! Just find the right file!
The algorithms need to change, and be used in the appropriate places.
Lossy is OK in certain applications, but not where precision is needed. So don't expect precision from LLMs; they're designed to be lossy.
We have too many limiting beliefs, about all sorts of things, that need correcting.
Pushing for precision in a lossy system is upside down.