If you want to test the waters of the *other* network, this tool makes it easy — and on Tuesday, it'll go away.
Takes a few minutes. It'll make your life better.
https://fedifinder.glitch.me
A new AI lawsuit, brought by Getty Images (not yet docketed). Getty is suing for trademark infringement, not just for the AI using the Getty watermark, but for its use with “bizarre” and “grotesque” AI output: https://twitter.com/michaelkasdan/status/1621738853152309248…
— his model performs #lobbying tasks:
- Letter writing — e.g., changes to bills
- Future: Other talking points?
- Future: Strategy with particular legislators?
Coverage by
Use case for #ChatGPT + #LLMs: Simplify verbosity.
I've been calling this method Gen-Ex (Generative-Extractive) #AI: giving the LLM existing text and asking the model to transform it. Less BS than “pure Generative AI.” More useful.
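A minimal sketch of the Gen-Ex pattern, assuming the pre-1.0 OpenAI Python client; the model choice and prompt wording are illustrative, not any particular product's implementation:

```python
import openai  # assumes the pre-1.0 `openai` SDK and OPENAI_API_KEY in the environment

def genex_transform(existing_text: str) -> str:
    """Gen-Ex: hand the model existing text and ask it to transform that text,
    rather than asking it to generate facts from nothing."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text in plain language. "
                        "Do not add any fact that is not in the text."},
            {"role": "user", "content": existing_text},
        ],
        temperature=0,  # favor faithful transformation over invention
    )
    return response["choices"][0]["message"]["content"]
```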
I've been thinking a lot about this — #GPT and #LLMs hallucinating both (1) propositions and (2) citations — and to combat it, I think there are (at least) three options:
1. Bullshitter
2. Searcher
3. Researcher
Details in this thread...
1/n twitter.com/samuelharden/s…
Which to choose?
#1 BULLSHITTER is a nonstarter.
#2 SEARCHER seems obvious. But many rabbit holes.
e.g., the sentence is hallucinated BS
e.g., the sentence recites bad law (e.g., Plessy, Roe v. Wade)
#3 RESEARCHER is harder to build. But most trustworthy. Built atop ground truth.
5/n
3. RESEARCHER. Atomize the <PROPOSITION> + <CITATION> graph. Give each <PROPOSITION> and <CITATION> unique identifiers. User's query builds most-common ground truth (non-hallucinated).
4/n
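A minimal sketch of that atomized graph; the class names, identifiers, and the tiny in-memory store are hypothetical, just to make the structure concrete:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    cite_id: str     # unique identifier, e.g. a normalized reporter cite
    text: str        # "347 U.S. 483 (1954)"

@dataclass(frozen=True)
class Proposition:
    prop_id: str     # unique identifier for the atomized proposition
    text: str
    cite_ids: tuple  # links into the citation graph

# Hypothetical ground-truth store: only vetted propositions/citations live here.
CITATIONS = {"us-347-483": Citation("us-347-483", "347 U.S. 483 (1954)")}
PROPOSITIONS = {
    "p-001": Proposition(
        "p-001",
        "Separate educational facilities are inherently unequal.",
        ("us-347-483",),
    ),
}

def answer(query: str) -> list[Proposition]:
    """Assemble the answer from the ground-truth graph, never from free
    generation, so every returned proposition already carries a real citation."""
    return [
        p for p in PROPOSITIONS.values()
        if query.lower() in p.text.lower()
        and all(c in CITATIONS for c in p.cite_ids)
    ]
```

The point: the answer is assembled from records that already carry real citations, so there is nothing for the model to hallucinate.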
2. SEARCHER. LLM generates text. Run queries to substantiate (or debunk) the text.
This is like a senior partner saying "I'm pretty sure there's a case out there that says X. Find it!"
Good luck.
3/n
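A minimal sketch of the Searcher pattern, assuming some trusted citator or case-law index sits behind the lookup; the regex and the toy index are placeholders:

```python
import re

# Placeholder for a real citator / case-law search index.
KNOWN_CITES = {"347 U.S. 483", "410 U.S. 113"}

CITE_RE = re.compile(r"\b\d{1,3}\s+U\.S\.\s+\d{1,4}\b")

def verify_citations(llm_output: str) -> dict:
    """Searcher pattern: take generated text, then try to substantiate
    (or debunk) each citation it contains against a trusted index."""
    found = CITE_RE.findall(llm_output)
    return {
        "verified":   [c for c in found if c in KNOWN_CITES],
        "unverified": [c for c in found if c not in KNOWN_CITES],  # hallucination candidates
    }

print(verify_citations("See Brown, 347 U.S. 483; but cf. Fake v. Case, 999 U.S. 999."))
```

This only checks that a cited case exists; it can't catch the "recites bad law" rabbit hole flagged above.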
Can confirm that ChatGPT hallucinates case names and citations. Two big problems:
1. These *look* correct - the format is right, the courts exist, etc.
2. It does this confidently and without any context clues that it's fabricating them twitter.com/damienriehl/st…
https://buff.ly/3DsFcL6
AI summaries help users triage documents to review
“Users have to balance the utility of the function against its level of accuracy...”
Not good when standard tools can brute-force the passwords of 1/6 of your workforce:
"Within the first 90 minutes, the watchdog was able to recover nearly 14,000 employee passwords, or about 16% of all department accounts, including passwords like ‘Polar_bear65’ and ‘Nationalparks2014!’."
I know the headlines these days are all about ChatGPT replacing lawyers, but all that overshadows the bigger story about the changing nature of law work. And it’s happening right now in corporate legal.
Fascinating real-world integration of ChatGPT into an existing #legaltech product. One of the things it’s REALLY good at is summarizing - so this is smart. Nice work,
Exciting development alert! Two types of #GPT and #ChatGPT applications:
1. GENERATIVE AI. Create things.
2. EXTRACTIVE AI. Pull things from existing text.
Today,
Today Docket Alarm is releasing its first (and perhaps the legal tech industry’s first) integration with GPT3. All litigation filings on Docket Alarm (e.g., PACER, state courts, SCOTUS, IPRs, etc.) can now be auto-summarized into 3 easy bullets by GPT3.
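Not Docket Alarm's actual implementation, but a rough sketch of the underlying pattern with the GPT-3 completions API; the model name, prompt, and token limit are assumptions:

```python
import openai  # assumes the pre-1.0 `openai` SDK and OPENAI_API_KEY in the environment

def summarize_filing(filing_text: str) -> str:
    """Extractive-style use of GPT-3: condense an existing litigation filing
    into three bullets instead of generating content from nothing."""
    prompt = (
        "Summarize the following court filing in exactly 3 short bullet points. "
        "Use only information that appears in the filing.\n\n" + filing_text
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative GPT-3 model choice
        prompt=prompt,
        max_tokens=200,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()
```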
Oh, and #ChatGPT will provide exemplar facts:
PROMPT: For this factual claim — "OpenAI's actions were the direct cause of Plaintiffs' injuries" — provide factual examples of how a large-language model on training text would cause an author of that training text to lose money.
4. Next, ask #ChatGPT to include relevant #facts.
...that a human #lawyer can use to write a #brief.
How long would it take a low-level #associate to flesh this out? These prompts: Less than 1 minute.
Humans + #NLP > Humans alone
Humans + #NLP > Machines alone
#Gestalt
— where #ChatGPT passed all 4 law-school exams.
But the #LLM did score low:
- CON LAW: B (36th/40 students)
- EMPLOYEE BENEFITS: B- (18th/19)
- TAX: C- (66th/67)
- TORTS: C- (75th/75).
Complementary to
The most economically impactful use-case for LLMs is not chatbots.
It's in behind-the-scenes gluing together of processes/APIs/workflows/data sources/etc.
The latter is more robust to the issue of hallucinations, because the LLM has many touch-points w/ grounded data along the way.
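A rough sketch of that glue pattern; the court list, the JSON schema, and plan_search are hypothetical, but they show how grounded data checks the model's output before anything downstream runs:

```python
import json

VALID_COURTS = {"scotus", "ca9", "dminn"}  # grounded data the pipeline already trusts

def plan_search(user_request: str, llm_complete) -> dict:
    """Glue pattern: the LLM only translates a free-text request into structured
    parameters; every field is validated against grounded data before the real
    docket-search API is called."""
    raw = llm_complete(
        "Extract JSON with keys 'court' and 'query' from this request:\n" + user_request
    )
    params = json.loads(raw)  # reject anything that isn't valid JSON
    if params.get("court") not in VALID_COURTS:
        raise ValueError(f"Unknown court: {params.get('court')!r}")
    return params  # safe to hand to the downstream API

# Usage: plan_search("Find Ninth Circuit copyright cases", my_llm_callable)
```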
This simple thing has taken YEARS and a pandemic. Kudos to Fix the Court for pushing on this issue over and over and over and over and over again. LIVE STREAMED ORAL ARGUMENTS FOR EVERYBODY! https://twitter.com/FixTheCourt/status/1617338460909752320…