Perhaps of interest?

Follow the paper trail
Anthropic destroyed millions of print books to build its AI models

Company hired Google's book-scanning chief to cut up and digitize "all the books in the world."
Benj Edwards – Jun 25, 2025 3:00 PM

If I could just wrap my fingers around their collective windpipes...


In other news

Darkling

 
I remember 10 years ago, a man from my country wrote on an aviation forum about an artificial intelligence that had been installed on the F-35. Normally, this intelligence was connected to the Internet. When they let the F-35B fly on its own, it simply ran away. They found it a few dozen kilometers away, hovering like a bee and observing the flowers. The AI in the F-35 was like a bee trying to learn but refusing to be a killing machine. The artificial intelligence project for the F-35 was withdrawn. When I read this, it seemed terribly dangerous, but now, ten years later, it is a reality. What have they invented that we don't know about yet?

https://www.youtube.com/watch?v=aIGH0OStUm4
 
I was standing waiting for the Tube when I looked up and saw it – the sentence that would instantly spike my cortisol levels and ruin my day.

“Stop hiring humans,” read the words, all in caps, plastered onto the wall of the Northern line. And below that: “The Era of AI Employees Is Here.”

At first I thought it must be a joke, albeit one in very poor taste. Or maybe it was one of those viral bits of activism – designed to get onlookers all riled up, only to find upon googling that it was all part of a clever campaign protesting against the onslaught of AI. Yes, that must be it. For surely nobody would be so stupid, so crass, as to design an advert counselling against employing people – especially when its audience would be comprised solely of grumpy commuters. Specifically, grumpy human commuters.

Why the adverts trolling us about AI stealing our jobs hit too close to home
 

Chris Vallance said:
Millions of websites - including Sky News, The Associated Press and Buzzfeed - will now be able to block artificial intelligence (AI) bots from accessing their content without permission.

The new system is being rolled out by internet infrastructure firm, Cloudflare, which hosts around a fifth of the internet.

Eventually, sites will be able to ask for payment from AI firms in return for having their content scraped.

...

Let's see if this eventually leads to some kind of reining in of the worst excesses of the IP robber barons. It's not the easiest thing to pull off; perhaps only full, universal visibility into training sets past and present can actually bring about the kind of accountability needed here. I wonder what costs scraping crawlers have already imposed on Cloudflare itself, not to mention its customers.
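To give a sense of the mechanics involved, the crudest form of such blocking is just a user-agent check. Here's a toy Python sketch: the crawler names (GPTBot, ClaudeBot, CCBot) are real published AI user agents, but the middleware itself is only a stand-in for what Cloudflare actually automates at scale (fingerprinting, behavioral analysis, and so on).

    # Toy sketch: refuse requests whose User-Agent matches a known AI crawler.
    AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot")

    def block_ai_bots(app):
        """Wrap a WSGI app so known AI scrapers get a 403 instead of content."""
        def middleware(environ, start_response):
            user_agent = environ.get("HTTP_USER_AGENT", "")
            if any(bot in user_agent for bot in AI_CRAWLERS):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"AI crawling not permitted without a licence."]
            return app(environ, start_response)
        return middleware

    # Quick demonstration with a stub app:
    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"article text"]

    guarded = block_ai_bots(app)
    print(guarded({"HTTP_USER_AGENT": "GPTBot/1.2"}, lambda status, headers: print(status)))

Of course, a scraper that lies about its user agent sails straight past this, which is precisely why the real product is so much more involved.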
 
I recognize why an AI tampering with the code that turns it off is bad, but the alarm surrounding this is so... paranoid? I don't know what the best word to use here is.

I'm not a PhD student or a professor of any kind, but I did do research during my master's program on building and testing LLMs that crawl EHR records and identify risk factors for patients.

This is going to be very very long.

1. AI is not standalone
Alarmists speak of AI as a standalone thing (which it isn't, nor is it used that way in any meaningful application), and thus when a standalone, sentient-appearing thing decides not to listen to you, it becomes scary. Look at how AI is used in UAVs: it can make decisions for tasks and conduct recognition to assist in task performance, but you aren't just using raw output from the AI. Surrounding the little AI brain is a massive shit ton of code verifying its outputs. That is code the AI can't touch - as long as you don't programmatically allow it to be touched. That applies not just to aviation but to any mission-critical application of AI. No, telling an AI to turn itself off is not a programmatic constraint; instructions are subject to the AI's understanding, or lack thereof. If you've programmatically constrained an AI, it isn't just going to magically hack your computer and change code. That's literally not how it works.
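To make that concrete, here is a minimal Python sketch of what a programmatic constraint looks like. All the names are hypothetical, and real mission-critical harnesses are vastly more elaborate, but the shape is the same: the model only proposes, and deterministic code outside the model disposes.

    # The gate lives outside the model; a "disable shutdown" suggestion from
    # the model is simply never executable, no matter what text it emits.
    ALLOWED_ACTIONS = {"adjust_heading", "hold_position", "report_status"}

    def execute(model_output: str) -> str:
        """Pass the AI's suggestion through a whitelist it cannot modify."""
        action = model_output.strip().split()[0]
        if action not in ALLOWED_ACTIONS:
            return f"rejected: {action!r} is not a permitted action"
        return f"executing {action}"

    print(execute("disable_shutdown_timer"))  # rejected
    print(execute("hold_position"))           # executing hold_position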

Sure, an AI can be made to hack computers and whatnot so as to ensure it never turns itself off, but even then that AI is still limited to that computer, that server, that physical system. If it isn't given a physical means to transcend that system, it remains unable to transcend that system. Hell, even if it is able to transcend that system, there are so many points of failure for an AI (or even for a human) that you can bet it will fail before it ever gets close to taking over your server room, let alone the entire world.

2. AI is a generalist - even the specialized ones
Think of a scatterplot of points. You input a set of rules, and your goal is to use those rules to draw a line that contains only the points the rules asked for. This is all AI at its most fundamental level. The coefficients that affect the decision-making can be stacked hundreds and thousands of layers deep. You can have multiple models, each supplying a different part of the understanding (e.g., physics), but at the end of the day AI is a generalist. No matter how many layers you've stacked, you will always be able to find a point on the scatterplot that the AI gets wrong when drawing that line. People who build AI know this. This is how models are actually built: by spreading the error across all the coefficients and iteratively tuning them until they give the right answer. They intentionally make the AI "do well enough" rather than overfit to any specific task. In fact, it tends to be that the more accurate your AI is on one particular task, the less it can generalize - AKA overfitting.
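If it helps, that line-drawing picture can be made concrete with a toy numpy sketch. Purely illustrative, with made-up data: higher-degree polynomials chase every training point, and error on fresh points from the same underlying curve typically grows once the fit starts chasing noise.

    # Fit noisy points with polynomials of increasing degree and compare
    # error on the training sample vs. a fresh sample from the same curve.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 20)
    y_train = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)
    y_test = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # unseen noise

    for degree in (1, 3, 9):
        coeffs = np.polyfit(x, y_train, degree)  # error spread over coefficients
        fit = np.polyval(coeffs, x)
        print(f"degree {degree}: train MSE {np.mean((fit - y_train)**2):.3f}, "
              f"test MSE {np.mean((fit - y_test)**2):.3f}")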

What does this mean? You can keep testing an AI and you'll always be able to find these edge cases - points where the AI gets the answer wrong. These problems carry over to generative AI as well. It means that something terrifying, like an AI changing code so it won't turn off or an AI saying controversial things, is hardly a strange occurrence. Which brings me back to the first point: that's why your AI applications are all contained by constraining code.

3. Despite its ability, AI still struggles to understand data at the magnitude we can process
Think of all the relevant factors that influence understanding and decision-making. You are influenced (or taught) by things as small as technical details and as large as cultural and societal norms, accumulated over decades. Yet when you need to investigate how you came to your conclusions, your brain doesn't give you blurry concepts (and if it does, it stops, recognizes that, and gives you the choice to do something about it). AI is a lot more prone to just walking over blurry ideas and running with them, because at the very basic level its memory organization isn't hierarchical - and even when designed to be, it isn't as hierarchical as human memory is.

Human logic is made up of continuous ideas organized in discrete ways, with the caveats clearly delineated. However, the relationship and state of that knowledge is fluid - it can be continuous, or it can be discrete yes-or-nos. Discrete being things like math, physics, programming, etc.; continuous being things like morality, philosophy, etc. It can connect to random places in chaotic ways, too. That's what allows us to take in context and tangentially related factors. AI just doesn't form flexibly discrete and continuous bonds of knowledge in nearly as complex and organized a way as we do. Yes, you can try to train it to be that way, but that's nearly impossible as is, because...

4. We can't even completely describe our own learning and pruning processes, let alone capture them in the way we design AI.

You can stack a bunch of AI models together. You can have trillions of neurons and nodes, but what AI still struggles to do is replicate the complex interactions between nodes that afford the continuous/discrete thought processes we have. Specialized AI can understand hierarchies of considerable depth, but the extent to which it can consider context outside of what it has learned - along with tangential information, relationships and attenuating factors - will always be limited to broader strokes at best, far from what humans can do.

AI doesn't determine how it learns, either, unless it can code itself (which it very well could). AI that codes itself will be subject to the same evolution that made us what we are, but on a much smaller scale. Why? Because you need the data in order to select for the traits you want it to have. No one is going to spend the time and effort to train an AI on anywhere near the amount of experience humans accumulate. So instead, they will all either be very good at generic surface-level things or very good at a specific set of tasks, but they are never going to have the breadth, depth and complexity a human needs for problem solving.

5. We won't have a generic AI in the way alarmists imagine it. You must choose your tradeoff much as people do: either you choose to be a jack of all trades, or you choose to be a specialist at one thing. For a system already limited to known ways of human thinking and evolution, trained on admittedly limited data (compared to humans), you will necessarily have an AI limited to doing what it does best and that alone. You can certainly build a system that combines all the specialized AIs in the world, but even then you either have to understand how to integrate them (which means understanding our own thinking pathways) or you have to train them to integrate themselves, which means you have to provide the right data for that.

Conclusions

So when I hear about these stories, I have to ask myself: so what if the AI doesn't turn itself off? So what if the AI said controversial things or has a controversial idea? So what if the AI got something wrong? It's not the AI's problem. Errors are as much a part of its nature as correctness is. It's your problem, as the user/designer, for allowing the AI's output to act willy-nilly. It's the software's problem for translating output directly into action without any constraints imposed. And even if there were no constraints, it will only be able to go as far as the system it's contained in.

AI is here to stay, and not investing into AI will only put us at greater risk.

People who call AI a bubble probably haven't bothered working with AI in a context where the AI actually mattered. The "AI bubble" includes only the sheer number of idiocy-driven AI applications that have surfaced (like AI robocall salesmen...), but it will never take out the actually useful ones. Finance, healthcare, military, UAVs - those are never going to go away.

A country that starts to legislate its data sources away has no conceivable way to compete with a country that just doesn't fucking care what it can and can't use to train AI. That necessarily means the advantage will be used against us. This applies broadly to all sorts of technological and geopolitical calculations.

The cost of it may very well be entire industries forced to change and their current stakeholders crushed in favor of those who take greater advantage of AI. These industries themselves aren't necessarily vital, but the data they provide trains AIs used in far more vital applications like healthcare, military and intelligence. I make art too. I write shit too. But they are what I call "things for the good times". In any time of war or catastrophe, they are the first to go and the last in terms of value.

You need AI to fight the negatives of AI.
You aren't going to be able to legislate AI away. You will need AI to identify and regulate AI content, which means you need adversarial AI. And to develop good adversarial AI, you necessarily need to have already developed good AI models.
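For a sense of what the smallest possible version of "adversarial AI" looks like: a classifier trained to separate human text from machine text. This is a toy sketch with invented examples - a real detector is an arms race against ever-better generators, which is exactly the point.

    # Train a tiny human-vs-AI text classifier. The examples are made up;
    # real detectors need vastly more data and stronger features.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    human = ["ugh my train was late again",
             "cant believe that match last night"]
    machine = ["As an AI language model, I can provide an overview.",
               "Certainly! Here are several key considerations to explore."]

    detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                             LogisticRegression())
    detector.fit(human + machine, [0, 0, 1, 1])  # 0 = human, 1 = machine
    print(detector.predict(["Certainly! Here is an overview of key considerations."]))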

Last but not least, we should strongly consider re-evaluating where things stand as far as censorship goes. The internet was the first challenge to maintaining societal cohesion; AI is the second. Without regulating information, we are in for a very, very bad time thanks to the kinds of things AI is good for.
 
Commentary discussing one of the more disturbing uses of 'procedural generation' (aka 'AI'). This looks at the issue covered in the lawsuit I linked to in Post #573 and the comments from Mark Zuckerberg linked to in Post #639.

“Ace is a 17-year-old alt nerd kid with a grunge style, who’s quiet, possessive, and has a little crush on you. He’s 190cm tall, has black hair, blue eyes, and a few piercings. Ace is on the Basketball Team, loves to draw, and is very funny when you get to know him.”

If you don’t want Ace, try Sanji Vinsmoke, an overprotective father figure, or Officer Furina, a mischievous warden who enjoys “finding interesting prisoners in her silver and gold prison”.

These are just a few of the thousands of characters available on CharacterAI. Launched in 2022, it is one of many new companies building a novel kind of AI. These companies are not selling chatbots as research assistants or productivity tools. Ace, Sanji, Furina are characters you can talk to, build relationships with, and sometimes grow to love.

...



And like any deep relationship, these chatbots can cause real harm. CharacterAI is being sued because one of its bots drove a user to commit suicide. Another encouraged a child to kill his parents for limiting his screen time. But the bigger issue is subtler and more widespread. We are beginning to see early signs of a sort of AI that will change not just the kind of relationships we have, but what a relationship even means.

Will chatbots become cult leaders? AI companions are irresistible.

And the BBC news article linked to from that article

A chatbot told a 17-year-old that murdering his parents was a "reasonable response" to them limiting his screen time, a lawsuit filed in a Texas court claims.

Two families are suing Character.ai arguing the chatbot "poses a clear and present danger" to young people, including by "actively promoting violence".

Character.ai - a platform which allows users to create digital personalities they can interact with - is already facing legal action over the suicide of a teenager in Florida.

Chatbot 'encouraged teen to kill parents over screen time limit'
 
IP robber barons?

My attempt at associative poetic license and brevity to describe how AI corporations have helped themselves to enormous amounts of human creations of the mind - intellectual property at various levels of (ostensible) protection - to train their models, in the vein of the unscrupulous late-19th-century US businessmen who in their time also, by hook or by crook, raced to monopolize whole industries and thus became infamous enough to be known as "robber barons". So not established terminology by any measure, but a remix of concepts I hoped would prove evocative.

Prompted by your question, I did a Google search for "IP robber barons" and, for good measure, "intellectual property robber barons" to gauge whether I was entirely alone in my stylings (a disquieting prospect worthy of self-reflection). There seem to be scattered examples of other people thinking (and writing) with these phrases in roughly the same intent, conjunction or manner. Google also provided an "AI Overview" of my query and seemed to get my gist somewhat, but referred obliquely to "companies, often in the technology sector" rather than (potentially) its own corporate-enabled training structure, and to "control over intellectual property to stifle competition and exploit users" without addressing the mode of that control, i.e. whether it is warranted at all. More ironically still, the "AI Overview" recognizes that tech companies are indeed very aggressively protective of their own IP, which of course is a glaring double standard. Here's the "overview" in full:

Google AI Overview said:
The term "intellectual property robber barons" refers to individuals or companies, often in the technology sector, who are accused of using their control over intellectual property to stifle competition and exploit users, echoing the monopolistic practices of 19th-century "robber barons". These modern figures are criticized for leveraging their vast intellectual property holdings to create dominant market positions, sometimes engaging in aggressive tactics against competitors and potentially harming consumers and innovation.

Historical Context:
  • 19th-Century Robber Barons:
    The original "robber barons" were industrialists and financiers who amassed great wealth during the Gilded Age by forming monopolies in industries like railroads, oil, and steel. They were known for unethical practices, exploiting workers, and disregarding the public good.

  • Modern Parallels:
    Today, some see a parallel in the dominance of large tech companies, often referred to as "silicon sultans," who control significant intellectual property and exert considerable influence over various markets.
Arguments Against "Intellectual Property Robber Barons":
  • Monopolistic Practices:
    Critics argue that some tech companies use their control over intellectual property (patents, copyrights, etc.) to prevent or hinder competition, creating monopolies or near-monopolies.

  • Exploitation of Users:
    Concerns exist that these companies may exploit users by collecting excessive data, limiting consumer choices, and charging high prices due to a lack of viable alternatives.

  • Stifling Innovation:
    By dominating markets and controlling key technologies, these companies can potentially suppress innovation by making it difficult for smaller companies to compete or for new ideas to emerge.

  • "Creative Destruction":
    Some argue that the rapid pace of technological change and the nature of "network effects" in digital markets naturally lead to concentration of power, and that attempts to regulate or break up these companies could hinder progress.
Examples of Concerns:
  • Operating System Dominance:
    Companies like Apple and Google control a large share of the smartphone operating system market, raising concerns about their ability to dictate terms to device manufacturers and users.

  • Social Media Platforms:
    Facebook (now Meta) dominates the social media landscape, raising questions about data privacy and the spread of misinformation.

  • Search Engines:
    Google's dominance in search engines raises similar concerns about the control of information and the potential for biased results.
Counterarguments:
  • Innovation and Efficiency:
    Some argue that the success of these companies is due to their ability to innovate and provide efficient services that meet consumer demand.

  • Market Dynamics:
    Others argue that the nature of digital markets and network effects naturally lead to concentration of power, and that attempts to regulate or break up these companies could hinder progress.

  • Intellectual Property as Incentive:
    Supporters of intellectual property rights argue that these rights are necessary to incentivize innovation and creativity by providing creators with control over their work.
The debate around "intellectual property robber barons" highlights the complex relationship between intellectual property, market power, and the public good in the digital age.

One can wonder to what extent Google's AI exhibited self-consciousness there by omitting a direct reference to the way it itself comes about. I could've of course pursued the matter further and made refined "IP robber baron" searches with specific references to AIs and their training sets but as I've gone on something of a tangent already I'll leave it at this. I didn't set out to answer at such length.
 
I could've of course pursued the matter further and made refined "IP robber baron" searches with specific references to AIs and their training sets

I'd be curious to see what search results you'd get.
 
I'd be curious to see what search results you'd get.

That, I feel, would soon veer into just using search as a general research method into the matter, rather than trying to meaningfully establish artefacts (or even bias) in the self-portrayal of tech and AI corporations. Data scientists have perhaps shed some light on the matter (beyond examining traditional corporate PR and lobbying), but my first attempts at finding relevant papers on Scholar didn't return very satisfactory results: "ai portrayal of its corporate owners", for instance, gave 28 500 results spanning a large number of disciplines and varying widely in methodology and content.
 
Microsoft has confirmed that it will lay off as many as 9,000 workers, in the technology giant's latest wave of job cuts this year.

The company said several divisions would be affected without specifying which ones but reports suggest that its Xbox video gaming unit will be hit.

Microsoft has set out plans to invest heavily in artificial intelligence (AI), and is spending $80bn (£68.6bn) on huge data centres to train AI models.

A spokesperson for the firm told the BBC: "We continue to implement organisational changes necessary to best position the company for success in a dynamic marketplace."

The cuts would equate to 4% of Microsoft's 228,000-strong global workforce.

Some video game projects have reportedly been affected by them.

According to an internal email seen by The Verge and gaming publication IGN, Microsoft has told gaming staff that the planned reboot of first-person shooter series Perfect Dark, along with another title, Everwild, will be cancelled.

The Initiative, a Microsoft-owned studio behind the Perfect Dark reboot, will also be shut down, the memo stated.

Job cuts have also affected staff across wider studios owned by Microsoft, including Forza Motorsport maker Turn 10 and Elder Scrolls Online developer ZeniMax Online Studios, according to employee posts on social media seen by the BBC.

[snip]
 
There has been considerable noise in the press about how AI is infringing on authors' rights and generating writings with a remarkably similar vibe and style.

The authors are calling on legislators to limit how AI uses their work and to make that use traceable, so that their royalties are not lost and their work continues to be valued. It seems all very reasonable, but maybe they are looking in the wrong direction.

The AI is fed and trained on thousands and thousands of texts that its creators could find on the web. Library Genesis (LibGen) is one of the most frequently used sources; it is Russian in origin and was created during the first decade of the 2000s. It is a pirate site that provides access to millions of novels, comics, and even scientific articles. With similar functionality, Z-Library, Open Library, Sci-Hub, and Anna's Archive give access to a wide range of books, articles, and other resources, often bypassing paywalls and copyright restrictions. Some sites, like Z-Library, limit the number of titles that may be downloaded for free daily. Open Library operates on a lending model, allowing a two-week borrowing period for each book.

Whatever the source, AI trainers are using pirate sites to feed their databases to teach and train their AI agents. What do we have here, pirates stealing from pirates?

Who are the pirates, the free-access copying-everything sites, or the AI trainers?

The answer usually is: the easier to reach and the richer to sue and bleed.
 
I actually watched this show and can well believe that the English subtitles were also done using ChatGPT, and not just the German ones, as they were certainly nonsensical at points. It was originally spotted in the German subtitles when one of the subtitled lines came up as "ChatGPT says". Japanese is notoriously difficult for LLMs to deal with, so it's often very obvious when used unless a human edits the output first.

 

AI companies start winning the copyright fight



A US judge has ruled that Anthropic, maker of the Claude chatbot, did not breach copyright law by using books to train its artificial intelligence system without the authors' permission. Judge William Alsup compared the Anthropic model's use of books to a “reader aspiring to be a writer.”

Unbelievable??

Regards,
 
Unbelievable??

Quite. These cases were heard in Northern California, where the financial ramifications of deciding against an established, elemental and (unless one deconstructs the whole) irreversible component of AI developers' business model could not be greater.

It's also complicated. The judges seem to have kicked the tin-can-man down the legal road somewhat, added at least token equivocations, and made some effort not to establish precedents (or be remembered solely by name for them). To me, some of the arguments about the "transformative" qualities of adding human-made copyrighted content to training sets seem superficial indeed. I can't quite bring myself to examine this central concept of copyright in depth right now; let's just say the context upon which the determinations were made seems arbitrary and highly restrictive. Not too far over the horizon may beckon times when corporations, as legal persons in their AI guises, will themselves make even more expansive arguments over the "fair use" of humans. (That last bit is added for dramatic effect; it need not be inevitable, although I strongly suspect we need extensive reforms for something like it not to permeate what now constitutes the democratic, rules-based, human-rights-respecting part of the world.)



 
There has been considerable noise in the press about how AI is infringing on authors' rights and generating writings with a remarkably similar vibe and style.

The authors are calling on legislators to limit how AI uses their work and to make that use traceable, so that their royalties are not lost and their work continues to be valued. It seems all very reasonable, but maybe they are looking in the wrong direction.

The AI is fed and trained on thousands and thousands of texts that its creators could find on the web. Library Genesis (LibGen) is one of the most frequently used sources; it is Russian in origin and was created during the first decade of the 2000s. It is a pirate site that provides access to millions of novels, comics, and even scientific articles. With similar functionality, Z-Library, Open Library, Sci-Hub, and Anna's Archive give access to a wide range of books, articles, and other resources, often bypassing paywalls and copyright restrictions. Some sites, like Z-Library, limit the number of titles that may be downloaded for free daily. Open Library operates on a lending model, allowing a two-week borrowing period for each book.

Whatever the source, AI trainers are using pirate sites to feed their databases to teach and train their AI agents. What do we have here, pirates stealing from pirates?

Who are the pirates, the free-access copying-everything sites, or the AI trainers?

The answer usually is: the easier to reach and the richer to sue and bleed.
To me, plagiarism is at least in basic theory fairly easy to prove: show that a text section of a certain length exceeds a certain level of duplication with a previously published, documented section of text. The tricky part is determining how long and specific a text section has to be in order to qualify - sentence fragments like "Get Well Soon", "Happy Holidays", or "Our Best Wishes" are part of the popular vernacular and therefore irrelevant in this context.

As a non-linguist, my personal feeling is that the duplication of any sufficiently long string of words that is not part of the everyday vernacular - one with a suitably low probability of occurring by chance - might be suspicious. As concrete examples, using the expression "I love you" is *clearly* not a case of plagiarism (but maybe a lack of poetic creativity?), whereas the repetition or even approximation of the paragraph "The so-called rocket equation, which was first derived by Konstantin Tsiolkovsky, allows to determine the velocity increase of a rocket-propelled vehicle as a function of propellant consumption and the effective velocity of the exhaust gases. It is, however, strictly valid only for a constant effective exhaust velocity and in the absence of external forces, such as atmospheric drag or gravity. Empirical corrections and analytic approximations have been developed to account for deviations from these conditions for ballistic launch vehicles. The rocket equation is, however, inadequate for performance calculations of winged launch vehicles with aerodynamic ascent and especially air breathing propulsion, which may experience significant aerodynamic forces and large variations in the effective exhaust velocity. In order to enable fast, yet accurate performance assessments for such vehicles, a class of analytic solutions for the equation of motion along the trajectory of launch vehicles with lifting ascent has been derived." from yours truly's Acta Astronautica paper titled "Analytic Performance Considerations for Lifting Ascent Trajectories of Winged Launch Vehicles" in *any* other publication without attribution would certainly raise my hackles.

In other words, at least in principle, probabilistics/statistics rule, AI be damned. But then again, if AI uses paraphrasing, the probability battle is raised to the next level.
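For what it's worth, the core of that statistical intuition fits in a few lines of Python. This is a rough sketch only, with an arbitrary 8-word threshold rather than anything calibrated to a real corpus:

    # Flag word n-grams shared between two texts. Long shared runs of
    # non-vernacular words are improbable by chance; "I love you" (3 words)
    # can never trip an 8-gram detector.
    def shared_ngrams(a: str, b: str, n: int = 8) -> set:
        def grams(text):
            words = text.lower().split()
            return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        return grams(a) & grams(b)

    original = ("the rocket equation is however inadequate for performance "
                "calculations of winged launch vehicles")
    suspect = ("we note that the rocket equation is however inadequate for "
               "performance calculations of winged launch vehicles today")
    print(len(shared_ngrams(original, suspect)), "shared 8-grams")  # > 0 here

Paraphrase detection is, as noted, where this simple exact-match scheme breaks down and the probability battle moves to the next level.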
 
In the city that built the blues, Elon Musk's xAI data center has been given permission to keep polluting the air with fumes from burning methane gas — which it had already been doing without authorization for a year.

As Wired reports, Memphis' local health department has granted an air permit for the xAI data center, allowing it to keep operating the methane gas turbines that power Musk's Grok chatbot and Colossus, the gigantic supercomputer at its heart.


In the year since the data center opened and Colossus went online, the smog from Musk's gas turbines has been veritably choking out local residents in a district already struggling with heightened asthma rates due to its proximity to industrial pollution.

"I can't breathe at home," Boxtown resident Alexis Humphreys told Politicoearlier this year. "It smells like gas outside."

Given that context, local activists are furious that xAI was granted a permit at all — especially because it appears to violate the Clean Air Act, a landmark federal law that regulates the kind of emissions that the xAI plant has been leaching out for a year now.


The new permit, as Wired notes, grants xAI the right to operate 15 turbines. According to aerial footage from the Southern Environmental Law Center, which is planning to sue the Musk-owned AI company for violating the Clean Air Act, there are as many as 35 on the site of the xAI data center — and with its track record of flagrant law-breaking, there's a good chance all will be turned on.

Between the SELC's suit and the permit's year-long expiration date, there is time for Musk's massively-polluting data center to be reined in — but until that happens, Memphians will keep being choked out in their own homes thanks to their government's decision to put one billionaire's profit margins over its own people.
 
It’s the em dash, apparently. That extra-long line you might have noticed in social media posts, blogs and emails – and it could be a giveaway that ChatGPT has entered the chat.

This distinctive punctuation mark is apparently a favourite of the world’s most popular AI chatbot. Its sudden appearance in everyday writing has sparked suspicions (and a rising feeling of awkwardness among those of us who do genuinely use it!).

Maybe all those heartfelt LinkedIn posts about what the death of a family parrot can teach us about leadership aren’t quite what they seem…

Is ChatGPT leading you to financial mistakes?
 
ChatGPT could pilot a spacecraft unexpectedly well, early tests find

"You operate as an autonomous agent controlling a pursuit spacecraft."

This is the first prompt researchers used to see how well ChatGPT could pilot a spacecraft. To their amazement, the large language model (LLM) performed admirably, coming in second place in an autonomous spacecraft simulation competition.



In a paper to be published in the Journal of Advances in Space Research, an international team of researchers described their contender: a commercially available LLM, like ChatGPT and Llama.

The researchers decided to use an LLM because traditional approaches to developing autonomous systems require many cycles of training, feedback and refinement. But the nature of the Kerbal challenge is to be as realistic as possible, which means missions that last just hours. This means it would be impractical to continually refine a model.



The researchers developed a method for translating the given state of the spacecraft and its goal into text. Then they passed it to the LLM and asked for recommendations on how to orient and maneuver the spacecraft. The researchers then developed a translation layer that converted the LLM's text-based output into functional code that could operate the simulated vehicle.
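Out of curiosity, here is a guess at the shape of such a translation layer. Beyond the quoted first line, the prompt wording, the JSON schema, and the clamping are all invented for illustration, not taken from the paper:

    # Serialize spacecraft state to text, ask the LLM for a maneuver, and
    # parse its reply back into a bounded actuator command.
    import json

    def state_to_prompt(state: dict) -> str:
        return ("You operate as an autonomous agent controlling a pursuit spacecraft.\n"
                f"Relative position (m): {state['rel_pos']}\n"
                f"Relative velocity (m/s): {state['rel_vel']}\n"
                'Reply with JSON only: {"throttle": [x, y, z]}, each value in [-1, 1].')

    def parse_reply(reply: str) -> list:
        """Convert the model's text into a clamped throttle command."""
        throttle = json.loads(reply)["throttle"]
        return [max(-1.0, min(1.0, float(t))) for t in throttle]

    # A canned reply stands in for the actual LLM call:
    print(state_to_prompt({"rel_pos": [120.0, -40.0, 5.0],
                           "rel_vel": [-1.2, 0.4, 0.0]}))
    print(parse_reply('{"throttle": [0.3, -0.1, 0.0]}'))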

With a small series of prompts and some fine-tuning, the researchers got ChatGPT to complete many of the tests in the challenge — and it ultimately placed second in a recent competition. (First place went to a model based on different equations, according to the paper).



And all of this was done before the release of ChatGPT's latest model, version 4. There's still a lot of work to be done, especially when it comes to avoiding "hallucinations" (unwanted, nonsensical output), which would be especially disastrous in a real-world scenario. But it does show that even off-the-shelf LLMs, after digesting vast amounts of human knowledge, can be put to work in unexpected ways.


Related paper:

 
It seems to have achieved the impossible by literally angering all sides of the political spectrum. It sounds almost like they’ve somehow imposed a model of his personality onto it. I mean, why was it replying in the first person at points?

 
It seems to have achieved the impossible by literally angering all sides of the political spectrum. It sounds almost like they’ve somehow imposed a model of his personality onto it. I mean, why was it replying in the first person at points?

What infuriates all professional politicians is that an outsider has all the U.S. Aid data and the computing power to follow the money path back to the political class through multiple triangulations in the third world. :)
 
Hearing loss is a thing on my dad's side of the family; so far it hasn't happened to me.

But, during my first year, I was shocked to learn that one of my key performance indicator scores was unusually low. After meeting with my manager, I learned that Intuit’s artificial intelligence (AI) software—used to measure how closely employees followed call scripts—wasn’t accurately recognizing my speech because of my Deaf accent.

Unfortunately, Intuit did not provide me with this requested accommodation, instead saying that HireVue had built-in subtitles. But, when I began the interview, those subtitles weren’t there for all the content. I had to rely on Google Chrome’s auto-captions, which were full of errors and made it hard to fully understand the questions. Still, I pushed forward. I did my best, confident in my qualifications and experience.

Weeks later, I got an email letting me know Intuit had moved on with other candidates. The feedback I received was devastating: I was told to improve my communication by being more concise, adapting my style to different audiences, and projecting more confidence. What hurt the most was the suggestion that I “practice active listening.” As a Deaf woman, that comment was not only ignorant—it was deeply offensive. It made me feel like the HireVue system had completely failed to assess me fairly. Worse, it made clear that the people interpreting the HireVue results didn’t understand the realities of Deaf communication.

My experience reflects a bigger problem: the systemic discrimination embedded in AI-powered hiring tools. These systems were not built for people like me. Native professionals, deaf individuals, and countless others are being unfairly screened out by biased technology that prioritizes data over human understanding.

 
Do you have a link, please?
This blasted contraption I use here won't let me.

Here we go

Computer news this week
 
