#LLM
One interesting aspect about #bitcoin is that it has allowed some people to invest in a technology, rather than some particular company (btw this doesn’t apply to many tokens or alt coins, which frequently act as unsecured securities).
Could this logic be applied to other tech somehow? It would be interesting to invest, for instance, in #LLM, rather than #OpenAI.
There are of course funds, but they are still investing in companies (and also are frequently closed to general public).
#AI & empty promises
- source Poorly Drawn Lines comic strip.

This is a fairly amazing demo¹ of the Notable² plugin for #ChatGPT — with a single prompt and a data source, the #LLM generates a robust #Jupyter notebook analyzing the data set and answering questions based on a "user persona". It even shows when the system got some code wrong, and how the user fixed it. There's a write-up³ too, which is nice since people get cranky about video tutorials lately.
¹ https://www.youtube.com/watch?v=lpJ6kQH2BgM
² https://noteable.io
³ https://medium.com/noteableio/revolutionize-how-you-make-data-driven-decisions-with-noteables-chatgpt-plugin-23977b792b11
A very interesting paper on Anthropomorphism and "AI" by Abercrombie, Curry, Dinkar and @zeerak.
https://arxiv.org/abs/2305.09800
PDF: https://arxiv.org/pdf/2305.09800
Via @emilymbender
The issues discussed apply to all kinds of dialogue capable machines, not only to Large Language Models.
The best summary of state-of-the-art knowledge about Large Language Models I've seen so far (I've seen many 😉)
https://www.youtube.com/watch?v=bZQun8Y4L2A
@barubary @randomgeek Yep. See https://www.theverge.com/2013/8/6/4594482/xerox-copiers-randomly-replacing-numbers-in-documents
Full account of #Xerox copiers mangling numbers here: https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres_are_switching_written_numbers_when_scanning
They used the #JBIG2 image format’s lossy “pattern matching & substitution” method that substitutes previously-encoded characters if they look enough like the one currently being encoded.
Try giving a #llm software instructions to top this legendary post title, brought to you by REALITY via @web3isgreat@indieweb.social
RE: https://indieweb.social/users/web3isgreat/statuses/110453062414769816
A simple example of the unreliability of ChatGPT #LLM.
Me: rewrite "Try not to get stuck." as an active sentence.
ChatGPT: ”Make an effort to avoid getting stuck."
Me: That's still a passive sentence!
ChatGPT: Apologies for that oversight. Here's an active sentence version:
"Strive to avoid getting stuck."
Me: Still passive!
/1
Lots of Prompt Engineering classes out there & relatively little talk about the dangers of inserting untrusted data into LLM prompts.
Feels like the 90s and everyone builds vulnerable systems at scales.
https://www.youtube.com/watch?v=Fz4un08Ehe8
#redteam #aiinjection #LLM #PenTest #openai #ml #langchain #infosec
The imminent threat of IP theft by some of the most powerful corporations in the world has, I believe, given me reason to remove the vast majority of my content from the Web. I have done so with a heavy heart, for I want to share the folktales and fairy tales and legends of Norway with as many people as possible. What I refuse to do, however, is work for the enrichment of the oligarchs at Google, Microsoft, OpenAI, etc. without (morally) decent compensation for my labour, or my explicit consent.
The Web has become a little less interesting today – blame the putrid corporate vultures.
Generative AI and large language models: background and contexts [Lorcan Dempsey] https://www.lorcandempsey.net/intro-gen-ai/
>The promise and challenge of Generative AI is now central. This is a summary overview of some of the major directions and issues, acknowledging that things are moving very quickly. I intend it as background to later posts about library implications and developments.
1/2
#LLM #chatGPT (#libraries) #competition #research #ethics #ChUI #copyright #Environment

AIs can do a lot of tasks, some of which are pretty scary to creatives and knowledge workers.
But the one that gives me the most immediate "oh god" reaction is when social media execs talk about "automated moderation."
That's not likely to put me out of a job, but as a community manager who values professionalism, the messes that would create are pretty [curse word here].
chatGPT and the Velcro Test

ChatGPT est basé sur un modèle entrainé sur un corpus de texte pour prédire et générer une suite de mots.
L'enjeu actuel est de pouvoir guider les moteurs LLM (Large Language Model) pour rendre plus fiable les résultats. Ces modèles sont entrainés uniquement avec du texte et donc n'ont aucune connaissance physique de la réalité. Le prochain défi est de pouvoir faire la même chose avec de la vidéo pour pouvoir analyser le mouvement. https://usbeketrica.com/fr/article/d-ici-cinq-ans-plus-personne-n-utilisera-un-modele-tel-que-chatgpt #veille #presse #ia #llm
Bringing the power of large language models (LLMs) to Microsoft Teams apps just got easier.
Join Joey Glocke, Steve Ickman and Ayca Bas to learn about the new Teams AI library that we just announced in public preview at Microsoft Build.
Watch: https://youtu.be/iEWEofW8w_0

ChatGPT v. The Legal System: Why Trusting ChatGPT Gets You Sanctioned - Recently, an amusing anecdote made the news headlines pertaining to the use of Cha... - https://hackaday.com/2023/05/29/chatgpt-v-the-legal-system-why-trusting-chatgpt-gets-you-sanctioned/ #artificialintelligence #largelanguagemodel #currentevents #featured #chatgpt #gpt-4 #llm
Not a big fan of this.
This is taking jobs from writers.
The only possible solution is if the writers write every possible thing the AI could say and train the LLM against that.
But, still...
Watch this Nvidia demo and imagine actually speaking to AI game characters https://www.theverge.com/2023/5/28/23740908/nvidia-ace-demo-voice-ai-npc-game-characters
#Nvidia #AI #LLM #Computex #GamingNews #EpicGames #UnrealEngine5 #MetaHuman

Has anyone created an LLM integration to create playlists?
My Wikipedia article of the day led me to a new artist (Summer Camp) and record label (Moshi Moshi). I used to do this manually but some automation would be cool.
It’s nothing to the people who are fighting for their livelihoods right now, but the other day I wrote a filk on a whim for a friend, and realised that I am never again going to have the joy of amusing someone with some silly pastiche sonnet or song I wrote without them thinking I had simply asked an LLM to write it for me, instead of the time I spent finding just the right way to put it together.
Again, it’s a small thing given there are creative professionals with hard-won skills being undercut by unthinking use of generative AI. But damn if some spark in my soul didn’t just sputter out upon that realisation.
The misinformation nightmare has begun. Current generation of #ai tools already poses existential risks.
“A filing in a case against Colombian airline Avianca cited six cases that don’t exist, but a lawyer working for the plaintiff told the court ChatGPT said they were real”
#ai #artificialintelligence #llm #llms #chatbot #chatbots #eu #europe #aiact #ethicalai @machinelearning
https://www.theverge.com/2023/5/27/23739913/chatgpt-ai-lawsuit-avianca-airlines-chatbot-research
Prompt injection: What’s the worst that can happen?
Simon Willison’s Weblog
https://simonwillison.net/2023/Apr/14/worst-that-can-happen/#data-exfiltration
Neil Clarke (editor of Clarkesworld) has a statement on AI in publishing. I don’t agree with all of it, but I appreciate the precision and clarity of these thoughts. It’s so much better than most of the discourse.
Thought-provoking article.
“Our societywide obsession with speed and efficiency has given us a flawed model of human cognition..”
“…the dangers of overautomating cognitive and creative processes are real”
#ai #llm #machinelearning #chatbot
https://www.nytimes.com/2023/05/28/opinion/artificial-intelligence-thinking-minds-concentration.html
世界的参数倒影:为何GPT通过Next Token Prediction可以产生智能 - 知乎
https://zhuanlan.zhihu.com/p/632795115
More importantly, why are we being nice to #LLM
"Being nice" is reserved for humans.
https://infosec.exchange/@SwiftOnSecurity/110448234660362907

Today’s AI News & Comment
‘They’re afraid their AIs will come for them’: Doug Rushkoff on why tech billionaires are in escape mode
https://www.theguardian.com/technology/2023/may/28/artificial-intelligence-doug-rushkoff-tech-billionaires-escape-mode
AI Is Changing How Americans Find Jobs and Get Promoted
https://gizmodo.com/chatgpt-jobs-ai-how-americans-find-jobs-get-promoted-1850481233
Is Generative AI Bad for the Environment?
https://gizmodo.com/chatgpt-app-ai-is-generative-ai-bad-for-the-environment-1850481190
Beyond the ‘Matrix’ Theory of the Mind
https://www.nytimes.com/2023/05/28/opinion/artificial-intelligence-thinking-minds-concentration.html
What Is an AI 'Black Box'?
https://gizmodo.com/chatgpt-app-what-is-an-ai-black-box-1850481273
Get ChatGPT Access on Mac OS 9 with LegacyAI
https://osxdaily.com/2023/05/28/get-chatgpt-access-on-mac-os-9-with-legacyai/
#AI #ArtificialIntelligence #ML #MachineLearning #LLM #ChatGPT #BingChat #SwarmsOfRampantAIChatbots
Every time I see, “As an AI language model, I am,” I think we need a new first person pronoun like how old French has that second person pronoun just for dehumanizing servants. #LLM
These LLM text-generation apps remind me of the "wrong answers only" internet game:
"Medical transcription of my last doctor's appointment. Wrong answers only."
"Legal document/brief/citation. Wrong answers only."
"Totally not plagiarized essay/speech/doctoral dissertation. Wrong answers only."
...and on and on and on...
What could go wrong?
Got generative #AI #geology field trip stop descriptions working on https://app-lagdal.azurewebsites.net/ if anyone wants to try themselves.
Grounds #llm text predictions with Macrostrat geologic map API data using a #langchain agent.
Synced geologic and topographic map + StreetView photos help select a lat/long point with a rock outcrop to use as starting input for the text output.
#openai key used has a fixed budget so might run out eventually.
Wrote a Fogknife post about my first experience using an #LLM—in this case, #Bard—as a programming assistant.
https://fogknife.com/2023-05-28-rubber-ducking-it-with-an-llm.html
Lawyer cites fake cases invented by ChatGPT, judge is not amused:
con TRIDENT GeoAI - https://trident.yuiseki.net/geoai - puoi fare interrogazioni in linguaggio naturale per ottenere i dati di #openstreetmap
Nel video i barbecue pubblici e le fontanelle dell'acqua che si trovano a Trento.
#AI #LLM #bot #geospatial #query

We will not publish anything “written” by “AI” because we don’t see the point. Through stories we humans share our experiences, ideas, dreams. Through stories we can live outside of ourselves and come to know one another a little better. Through stories we make sense of the world. None of that can sprout from a generative bullshitter program.
It’s sad that this even needs to be stated, but as our current Mastodon instance ends in “.ai” … well, we have spoken.
@RanaldClouston @buzz https://betterprogramming.pub/large-language-models-dont-hallucinate-b9bdfa202edf
#bulshitting #LLM #ChatGPT #AI
“If not “hallucination” or “delusion,” then what term would be appropriate? Despite it representing less than polite language, I actually agree with a number of commentators who suggest “bullshitting” as a term that captures quite accurately what is really going on in these models. Rather than lying, which like a delusion, would require the model to hold some true beliefs about the world, bullshitting suggests the model has no meaningful relationship to either beliefs or truth, which is indeed the state of current models.”
We need a language for discussing the "mind," "subjectivity," "perspective," etc., of machines that doesn't evoke consciousness, sentience, or humanness in any way. An #LLM has certain features that are only recognizable and articulable insofar as they're shared with human subjectivity. (I have lots to say about that actually...). There are no other comparable phenomena, so all we can do is "humanize" them thru descriptive vernacular. That's not good
@davidmortensen @buzz But in the origin "#hallucination" is a medical/psychiatric word, a definition from neuro science. And most of the texts about that definition, at least in media, misuse it for humanising. If we need to define a definition, it's already too complex to be understood outside the science bubble. But fascinating for the other side: What if hearing voices would be only an individual inner #LLM? Would we approach that differently?
Is this a great idea or the worst idea in the history of mankind?
An LLM trained on the techniques outlined in this book:
https://bookwyrm.social/book/428545/s/how-minds-change
And an app that can listen to my conversation and feed the input from, say, a family member I'm engaging into the LLM, then devise the correct response, and speak that into my earbud to tell me what to say next.
#ai #llm @davidmcraney
I used to think everyone is creative in their own way.
Now, with some of the attitudes around LLMs, I can’t tell if some people are truly not creative and hate people who are, or if some people have suppressed their creative impulses too deeply to feel them, and hate people who haven’t.
I once worked with a guy who would expound on all kinds of topics with great confidence and glibness. He sure *sounded* like he knew what he was talking about, and exec staff just *loved* him and kept promoting him and putting him in charge of initiatives and strategies.
Thing is: if you actually knew anything about what he was expounding on, he was utterly and completely full of crap.
When I read another credulous take on #ChatGPT or other #LLM, I think about that guy.
Same
Exact
Energy
Hey #LLM #GenerativeAI experts out there: How the heck do you debug things when your model returns the “wrong” answer? Where do you even start? If I controlled all prompts and the model yields an unexpected result, what can I even do about it?
*ugh* Not this. NEDA has fired all 6 staffers who train volunteers to help people with eating disorders and wants to replace them with a chatbot.
"You Cannot Serve Two Masters: The Harms of Dual Affiliation"
--Ben Recht, David A. Forsyth, and Alexei Efros • Aug 9, 2018
Been cleaning out my #firefox bookmarks. Stumbled upon this post from 2018 about #BigTech raiding universities for AI talent, and what some of the repercussions would likely be. Some choice quotes:
"Facebook would like to have computer science faculty in AI committed to work 80% of their time in industrial jobs and 20% of their time at their university. They call this scheme 'co-employment' or 'dual affiliation.' This model assumes people can slice their time and attention like a computer, but people can’t do this. Universities and companies are communities, each with their particular missions and values. The values of these communities are often at odds, and researchers must choose where their main commitment lies."
"Though big companies belabor promises of complete intellectual freedom to faculty, that can’t and won’t happen because the purpose of companies is to make money for shareholders."
"Like many really dangerous bargains, the harms are diffuse, and the benefits are focused."
Yeah. Talk about being ahead of the curve there, what with how #ChatGPT and #LLM and #AI tech has played out in the intervening five years.
"Generative models are designed to serve users end-to-end through natural language, and often the need for customizing these large models is not addressed due to high fine-tuning costs or proprietary technology. In the absence of expert or fine-tuned #LLM, the applications of such models in [Computational Social Science] remain limited to generic data labeling and processing such as stance detection or sentiment analysis."
https://arxiv.org/pdf/2305.12544.pdf
AI Canon: Excellent roundup of technical articles on modern deep learning AI.
https://a16z.com/2023/05/25/ai-canon/
#deeplearning #chatgpt #llm #ai #+
With all the fuss around 'AI', machine learning and large language models like ChatGPT you might think that the future of 'natural language processing' (NLP, or computers understanding human language) is sorted... these PhD students have compiled a list of things that AI / language models can't do https://arxiv.org/pdf/2305.12544.pdf
And they have a feedback form for additional suggestions: https://docs.google.com/forms/d/e/1FAIpQLScQFbuTJIbJVNBQnX6fg_UgmaIChuakzZsxfkLtM_XbY4cb6Q/viewform
Wie verändert KI das Lernen und Lehren an Hochschulen? Interview mit Prof. Stegemerten und mir in der Le/NI:
https://www.hs-niederrhein.de/fileadmin/dateien/hll/hochschuldidaktik/LeNi-Beilage_in_der_NIU/Le_Ni_Beilage_4_KI_in_der_Hochschullehre_Gekommen_um_zu_bleiben.pdf
#ai #chatgpt #fedilz #fedicampus #llm #gpt #highered #highereducation
I ran my #MaxHeadroom dialogue prompt through the new Guanaco #llm and uh... it's less "fun" and more "fucking brutal".
"I spy with my little eye, a man who is all eyes. A security guard who guards your eyes while he sips his coffee and wonders why he was hired.
You see, I can see through his cheap sunglasses, that his uniform is ill-fitting and reeks of stale body odor. He is the keeper of keys, and the master of surveillance tapes.
A man who patrols the night, and guards the parking lots. His only purpose in life is to write citations and harass the youth.
So, next time you see a rent-a-cop, just remember, he is more than meets the eye. A walking cliché, a well-worn joke. A man with no ambition, and no chance of promotion.
A sad sack of a human being, who once had dreams, but now they are but a distant memory.
A cautionary tale of how life can be when you take the easy way out.
Don’t you feel sorry for him? I do. I mean, why else would he be a rent-a-cop?"
#jsonformer: A Bulletproof Way to Generate Structured JSON from Language Models https://github.com/1rgs/jsonformer
#LLM #json
I got early access to Google's new #AI powered search experience, and I wrote about my first impressions here: https://kalynamarketing.com/blog/google-sge-review
Main thoughts:
I'm impressed. Google created a refreshing implementation of generative AI to search.
SGE never made me feel like it was trying to be more than a search engine, or force a clunky chatbot dynamic on me.
I don't know how practical it will be, but I am pleasantly surprised so far

Feels like combining #llm models with what Semantic Web is doing for a long time with triple stores and ontology modelling would be a good idea to enhance #ml #machinelearing #artificialinteligence #ai algorithms, has that been done already?
@ct_bergstrom @emilymbender Great article, but it is approaching the issue from the wrong angle for me.
I don't need further convincing about how impossible the challenge should have been. Recent #llm reaching their observable capabilities without having somehow 'unlocked' meaning just seems even more outlandish than someone miraculously learning Thai in the example
Wired reports on #LLM prompt injection vulnerabilities and risks: https://www.wired.com/story/chatgpt-prompt-injection-attack-security/ #cybersecurity #ai #chatgpt
Being sceptical about the #LLM hype myself, I'm nevertheless stunned by its very existence.
She writes: "You could maybe even write something that a Thai speaker could make sense of."
That is the true mystery to me. How is this possible even without knowing the language, and to such an extent as we see it? And why are we so easily tricked to believe in #ai ?
"We’re the ones doing all the meaning making there, as we make sense of it."
Sure, but why so - automatically?
Excellent article by @gabenicholas@twitter.com and @AliyaBhatia@twittercom for @CenDemTech via @Wired that talks about #multilingual #LLMs being used for content moderation - such as Facebook's new 4000-language #LLM - and their shortcomings.
"Another challenge for multilingual models comes from disparities in the amount of data they train on in each language. When analyzing content in languages they have less training data for, the models end up leaning on rules they have inferred about languages they have more data for. This hampers their ability to understand the nuance and contexts unique to lower-resource languages and imports the values and assumptions encoded into English. One of Meta’s multilingual models, for instance, was trained using nearly a thousand times more English text than Burmese, Amharic, or Punjabi text. If its understanding of those languages is refracted through the lens of English, that will certainly affect its ability to detect harmful content related to current events playing out in those languages, like the Rohingya refugee crisis, the Tigray war, and the Indian farmers’ protest."
https://www.wired.com/story/content-moderation-language-artificial-intelligence/
As more and more content will be generated by #AI, we, the people, will understand less and less.
@Morn Thats funny. I got the same sentiment from ChatGPT. #LLM #ChatGPT #GoogleBard
Unit tests are NO boilerplate! They are a very important part of engineering correct and maintainable software!
Please get this right!
#ArtificialIntelligence #LLM #LLMs #MSBuild #SoftwareEngineering #SoftwareDevelopment #UnitTest #TDD
#chatGPT is like autocomplete; it's only as intelligent as the thought it's completing. #ai #llm #generativeai #gpt
Finished @baldur's provocative book, The Intelligence Illusion, last night and wrote up a short review on The Storygraph.
tldr: Buy it, read it, spread the word!
#AI #AGI #LLM #ChatGPT #bookstodon #AmReading
https://app.thestorygraph.com/reviews/f8651234-2105-4280-97bc-24836f83b867

Note that the training data heavily relies on the Bible and its translations. Lots of bias there.
Meta unveils open-source #AI models it says can identify 4,000+ spoken languages and produce speech for 1,000+ languages, an increase of 40x and 10x respectively.
So writing helper AIs are dropping everywhere now. You, too, can sound like a Linkedin person with such insightful gems like this
"In a world of complex social media language and trendy hashtags, let us remember the importance of clear and concise communication. 🌟💬 #ClarityOverJargon #SocialMediaPosting 📝"
It even added emojis ... ugh. All socials will be filled with SPAM nonsense like this via these bullshit cannons in 5..4..3...
#DonaldKnuth on #ChatGPT.
https://cs.stanford.edu/~knuth/chatGPT20.txt
Among other good points:
"I find it fascinating that novelists galore have written for decades abt scenarios that might occur after a 'singularity' in which superintelligent machines exist. But as far as I know, not a single novelist has realized that such a singularity would almost surely be preceded by a world in which machines are 0.01% intelligent (say) & in which millions of real people would be able to interact w/ them freely."
#GeneralCatalyst, Andreessen #Horowitz bet on large language models [#LLM's] for health care..."
https://www.statnews.com/2023/05/16/artificial-intelligence-health-care-llm-general-catalyst-a16z/
#Arrival is just one of the best films ever!
If you've seen it already, watch it again - but this time with #LLMs in mind. It gives a whole new dimension to the film. 🙃
#LLM #Movies #ArtificialIntelligence #AI #Language #Communication
I exercise #LLM chatbots by asking questions about a subject like the history of #hypertext which I know pretty well.
It’s a topic that’s obscure, but well covered in Wikipedia and other web sources.
A top level: “Tell me about the history of hypertext” generally gives what sounds like a superficial C+ summary.
But followup questions produce coherent sounding answers which blithely confound names of people and organizations in any way related to hypertext.
Danforth never fully recovered from looking over the edge of the crater the explosion had left behind. One glimpse into the gaping abyss where the activists had blown up InfiniteAI’s headquarter, and he went insane. „It’s monkeys and typewriters all the way down“, he whispered, clenching his teeth in terror. #ai #chatgpt #llm
🎯 Explainable AI suffers from an epidemic. I call it Explainability Washing.
💡Think of it as window dressing—techniques, tools, or processes created to provide the illusion of explainability but not delivering it
Let’s use this hyped example from OpenAI. Title is sensational-- Language models can explain neurons in language models.
But is that the case? Let's dig in. 👇
https://openai.com/research/language-models-can-explain-neurons-in-language-models
#AI #ML #XAI #gpt4 #chatGPT #Bard #LLM #NLP #HCI #UX #UI #responsibleai #mastodon
1/n
I have never used an LLM or “AI" to write anything, and I will never use LLMs to write anything, even the most boring emails
I also promise to do whatever I can to sabotage LLMs and AI whenever possible
I’ll delete this if/when I ever change my mind
Thanks to @clive for this hilarious example of #prompt injection on #Twitter -- beware of letting the public interact directly with your #AI or #LLM, especially if it can post to the socials in response
Also (Medium members only) check out Clive's full article on prompt injections. https://clivethompson.medium.com/the-devious-genius-of-prompt-injection-attacks-on-todays-big-language-models-23ef4ffa6239