The Project Stargate announcement...and millions of scifi fans died inside. :p
 
The real project in Cheyenne Mountain is codenamed "Wormhole Xtreme."
 

 
Swedish warplane maker Saab and Germany-based Helsing sent a Gripen-E combat jet aloft in late May powered by an artificial-intelligence agent that took control of long-range flight maneuvers from the human pilot, the companies announced.

The series of three test flights above the Baltic Sea constitutes the first time that an AI application was in charge of real-world maneuvering, recommending missile shots at a Gripen training aircraft from a distance and evading disadvantageous flight paths that could turn dicey in a closer dogfight.

 

DARPA's Artificial Intelligence Reinforcements (AIR) program aims to take what the agency learned during Air Combat Evolution (ACE) trials, which saw AI-piloted F-16 fighter jets engage in dogfights with human pilots, and turn the autonomy up to 11. DARPA hopes it emerges from the AIR program with more AI-equipped F-16 fighters that are tactically autonomous, making them able to operate in multi-ship configurations beyond visual range.

While not as old as the ACE program, AIR isn't new, with its original solicitation published way back in 2022. Today's news pertains to an $11.3 million contract modification quietly awarded to Systems & Technology Research (STR) on Wednesday for more work on the program. Described as funding for "Option One" of the AIR program in the DoD's contract notification, a DARPA spokesperson told us the award is actually for AIR's second of two phases, meaning that the project is advancing.
 

Airbus and Shield AI complete first MQ-72C aerial logistics connector autonomous flight​

 

U.S. Navy Begins Search for Machine Learning Combat Assistants on Submarines​

The RFI lays out three core capability updates: a tactical control re-architecture and integration plan, a payload re-architecture and integration plan, and the development and integration of a new Artificial Intelligence/Machine Learning (AI/ML) Tactical Decision Aid (TDA).
 

AFA 2025 — Venerable aviation supplier GE Aerospace teamed with seven-year-old Merlin Labs to add AI to GE avionics used on a wide range of military and civilian aircraft, the companies told Breaking Defense ahead of an announcement at the Air Force Association annual conference here.

The firms’ first target is likely to be the Air Force’s planned cockpit overhaul for the aging KC-135 tanker, the Center Console Refresh (CCR), for which the formal competition could kick off as early as this fall, executives told Breaking Defense. But in the slightly longer term, they envision their combined product assisting human pilots on multiple aircraft, allowing the human crew of, for example, the C-130J transport to be cut from two to one — and even, ultimately, to zero.

The current plan is to have a human pilot “in the loop” to oversee, and if necessary, override the AI, GE general manager for “connected aircraft” Jeremy Barbour told Breaking Defense. But as Merlin’s technology matures, he said, “I’m excited about where the relationship could grow over time, so we’ll see how that evolves.”

[snip]
 

A decent write-up of what AI gets used for in ABMS, what works, what doesn't, and why.

Background:
At a recent experiment staged by the Advanced Battle Management System Cross-Functional Team, for example, vendors and Air Force coders used AI to do the work of “match effectors”—deciding which platforms and weapons systems should be used against a particular target and generating Courses of Action, or COAs, to achieve a military objective.

Findings:
  • AI generates Courses of Action (COAs) very quickly and in large numbers, but sometimes did not take all necessary factors into account.
    • One cited example: the AI generated a COA that called for engaging with IR-guided munitions under heavy cloud cover, meaning it failed to account for the weather.
  • All forms of AI (not just generative AI) have problems with accuracy and bias; some models trained on more historical information formed stronger biases toward prior information that ended up being inaccurate.
  • AI's limitations require carefully fencing off AI systems during deployment.
    • Flight and other critical systems that don't need creativity or a wide scope of reasoning should be insulated from AI.
  • AI's limitations require a human in the loop to second-guess and double-check the conclusions AI generates.
  • Making AI work requires massive underlying infrastructure:
    • Data processing, formatting, and structuring for training and inference, plus data integrity.
    • Human resources - people whose job is to ready that data for training and to develop AI pipelines.
    • Hardware and software scalability problems in supporting all the data and development tools that come from different vendors.
This article touches on a lot of things I've written about at length on this forum: namely, that AI is a data-driven statistical model and not a human, and while it can outperform humans in many ways, it thinks differently and performs different functions than a human does.

You have to have the encapsulating software verification on the outside to regulate the AI's outputs. Fencing off critical systems on a plane isn't terribly hard, but regulating what kind of output it creates for something like ABMS is much harder.
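As a rough illustration of what that encapsulating verification layer could look like, here is a minimal sketch of an output "fence" that runs deterministic, hand-written checks on every AI-suggested COA before it reaches an operator. All names here (`Coa`, `check_coa`, the 70% threshold) are invented for illustration, not from any real system:

```python
from dataclasses import dataclass

@dataclass
class Coa:
    weapon_guidance: str   # e.g. "IR", "radar", "GPS"
    cloud_cover_pct: int   # forecast cloud cover over the target, 0-100

def check_coa(coa: Coa) -> list[str]:
    """Return a list of hard-rule violations; empty means the COA passes the fence."""
    violations = []
    # Deterministic rule, not learned: IR seekers degrade badly under heavy cloud.
    if coa.weapon_guidance == "IR" and coa.cloud_cover_pct > 70:
        violations.append("IR-guided munition selected under heavy cloud cover")
    return violations

# The article's failure case: the model recommended IR shots despite the weather.
bad = Coa(weapon_guidance="IR", cloud_cover_pct=90)
print(check_coa(bad))   # flags the violation instead of passing it to the operator
```

The point is that the checks are written by humans who know what must never happen, which is exactly the kind of knowledge a statistically trained model has no guarantee of holding.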

In order to have productive AI, the developer must understand exactly how the AI came to its conclusion instead of writing it off as a single decimal digit that happened to tip the result. This means you have to peel back the system model by model, layer by layer, line by line of input data to figure out which layer(s) and model(s) are reacting wrongly to which kind of input data.

With a single aircraft or a small group of aircraft and scope-limited AI (e.g. air-to-air engagement planning for your squad), this is far more feasible. With ABMS, an untold number of factors comes into play - weather, fuel, kinematic performance, positioning, weapon makeup, etc. The sheer volume of information used to train the models of such a system, as well as the sheer amount of information flowing into and out of it, introduces the problem of scalability.

Scalability directly translates to hardware scalability, data integrity and quality, and the manpower behind polishing and massaging that data to ensure you are teaching the AI the correct conclusions you want it to draw. The article mentions not just generative AI's hallucinations, but also a targeting model with flawed decision-making thanks to its bias toward prior data. For any number of these unforeseen problems in understanding, you have to have the right data and implement the right algorithms to ensure the model consumes the right data in the right places.

I don't think the accuracy problem is necessarily major. Instead, it's the difficulty surrounding the integration of a vast number of hardware and software vendors. AI absolutely needs to be in place for the speed and sheer quantity of parameters it can compute. Even sometimes-low-quality suggestions arrive far faster than a user determining COAs entirely by themselves (which the article quotes as taking 16 minutes). Yet the complexities and considerations required for this are:
  • On the user's end, the user must understand AI in ABMS as a suggestion tool.
  • People at the top must understand AI is a data-driven statistical model and not a human. You can't take its human-likeness for granted. That means you need a stable pool of people to constantly procure the right data, update the models with the latest algorithms, and help integrate more models or new sensing nodes. You also can't start slashing people willy-nilly or hold the wrong expectations of AI, or you are bound to make the system unusable.
  • On the software side, the sprawling architecture of ABMS probably implies the DoD owns the code and data for it and either hires or contracts out its updating and development. But you need a lot of oversight to make sure that whatever vendors add to the system works with everything else. It remains to be seen how such a pipeline can work at a fast pace when you have so many parameters (and attached processing logic) working together and everything interconnected.
  • On the data side, there's a big data-integration problem - every single end node (your F-47s, CCAs, Vbats, even your quadcopter ISR drones) needs the data it transmits to follow a single set of contracts that ABMS understands. Any and every future system needs to be able to turn its data into something ABMS understands and can act on. Either that responsibility falls to the developers of the weapon platforms, sensor nodes, and aircraft themselves, or it falls onto the ABMS devs to do it on the server side.
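To make the "single set of contracts" idea concrete, here is a toy sketch of server-side contract validation. The schema, field names, and node identifiers are all invented for illustration; a real ABMS contract would be vastly richer, but the principle - reject anything that doesn't conform before it touches downstream models - is the same:

```python
# Hypothetical shared data contract that every end node's messages must satisfy.
REQUIRED_FIELDS = {
    "node_id": str,        # which platform sent this report
    "timestamp": float,    # UTC seconds since epoch
    "track_type": str,     # e.g. "air", "surface"
    "position": list,      # [lat, lon, alt_m]
}

def validate_track(msg: dict) -> list[str]:
    """Return contract violations; an empty list means the message is usable."""
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in msg:
            errors.append(f"missing field: {field}")
        elif not isinstance(msg[field], expected):
            errors.append(f"wrong type for {field}: got {type(msg[field]).__name__}")
    return errors

ok = {"node_id": "CCA-7", "timestamp": 1718000000.0,
      "track_type": "air", "position": [54.3, 19.9, 9100.0]}
print(validate_track(ok))                      # [] -> contract satisfied
print(validate_track({"node_id": "VBAT-2"}))   # lists every missing field
```

Whether this translation burden lands on the platform developers or on the ABMS side, some layer has to enforce it, or the training and inference data rots.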
If they can build the right software and data pipelines, AI in ABMS will work to expectation. The other issues are more or less non-issues. In the future, whoever's ABMS is better will ultimately be a decisive factor in a conflict, and how good your ABMS is depends entirely on how, and how fast, your software and data pipelines can update.
 

Air Force AI Targeting Tests Show Promise, Despite Hallucinations​


Navy demonstrates AI autonomy on BQM-177A target​

 

Air Force AI writes battle plans faster than humans can — but some of them are wrong​

So if I, Debbie Downer, were to rewrite that headline?

Air Force AI writes randomly bad Battle Plans 400 times faster than humans can.
 
Fundamental questions never seem to get answered in discussions of AI.

What is AI supposed to be, exactly?

How is what we call AI implemented and how does it produce whatever effects it produces (or seems to produce)?

If AI is a "model" (statistical or otherwise), what exactly is it a model of? If you want to say "human intelligence", how do you define the latter term?
 
AI-developed controller directs satellite in pioneering in-orbit maneuver
 

US Air Force wants AI to power high-speed wargaming​


Navy, Palantir unveil ShipOS in a bid to boost nuclear sub production​

 

Anysignal raises $24 million Series A to advance autonomous RF sensing for space and national security​

The company develops software defined systems that use artificial intelligence to sense, identify, and manage radio frequency signals across contested and congested electromagnetic environments. Its platform is designed to enable autonomous spectrum awareness, adaptive sensing, and rapid decision making for applications spanning space systems, electronic warfare, and advanced radar.
 
Fundamental questions never seem to get answered in discussions of AI.

What is AI supposed to be, exactly?

How is what we call AI implemented and how does it produce whatever effects it produces (or seems to produce)?

If AI is a "model" (statistical or otherwise), what exactly is it a model of? If you want to say "human intelligence", how do you define the latter term?
It's an important question and I'm not even entirely certain computer scientists agree on this.

I generally would only call something "real" AI/ML if it's trained to recognize patterns (and ultimately make decisions) based on real-world data.

Most of the crap being peddled today as "AI" isn't really AI--just the same old traditional algorithms with a new name to try to hoodwink program offices.
 
It's an important question and I'm not even entirely certain computer scientists agree on this.
<snip>
I think that many if not most engineers and computer scientists haven't even considered the question in the rush to monetization. They seem unaware of the fact that really smart people in other disciplines--neurobiology, linguistics, medicine, psychology, philosophy, and theology, to name a few--have struggled with this question for millennia without any sort of resolution. Nor do they seem to remember that an earlier generation of rather more sophisticated computer scientists failed to bring about a supposedly "imminent" AI back in the 1970s and '80s (see Douglas Hofstadter's Pulitzer-winning--and amusing--Gödel, Escher, Bach: An Eternal Golden Braid, 1979).

I generally would only call something "real" AI/ML if it's trained to recognize patterns (and ultimately make decisions) based on real-world data.

An entirely sensible response. But, unfortunately, I can do this right now using a simple program that uses one regular expression (regex) for the pattern-matching and an IF/THEN/ELSE branch for the decision.

IF <some_text> MATCHES <pattern> THEN <do_this> ELSE <keep_looking>

The more patterns and actions that I supply, the more sophisticated and surprisingly "intelligent" the process can seem. But the "intelligence" is all mine: I supply the patterns to look for and specify what the program does when it finds one. I study representative input data sets and learn what sorts of things we need to:
  1. match
  2. absolutely, positively NOT match.
The second point is critical. You have to constrain the patterns so that they do not match less or--worse--more than you intended. Get this wrong, and results in the real world can be disastrously misleading or even destructive, depending on the decisions specified on the THEN side. For example, if your program is supposed to delete double-word typos (the the) in a set of documents, but you do not constrain the pattern properly, your program might falsely tell the user that there are no double-word typos. Or it might delete everything in the documents.
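To show how much of the "intelligence" lives in the programmer's constraints, here is a runnable version of the double-word example. The naive pattern matches any repeated word, so a human-curated exception list (invented here for illustration) is needed to keep it from mangling legitimate doubles like "had had":

```python
import re

# Naive pattern: any word immediately repeated, e.g. "the the".
naive = re.compile(r"\b(\w+) \1\b")

# Constraint supplied by the human: doubles that are valid English and
# must NOT be "corrected". This list is the programmer's knowledge, not the regex's.
LEGITIMATE = {"had", "that"}

def fix_double_words(text: str) -> str:
    def repl(m):
        word = m.group(1)
        # Leave legitimate doubles alone; collapse everything else to one copy.
        return m.group(0) if word.lower() in LEGITIMATE else word
    return naive.sub(repl, text)

print(fix_double_words("the the cat sat"))      # "the cat sat"
print(fix_double_words("the work he had had"))  # unchanged: legitimate double
```

Every behavior here - what counts as a typo, what must be spared - was decided by the person who wrote the pattern and the exception list, which is precisely the point being made above.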

I don't know exactly how the much-touted Large Language Model (LLM) "AI" programs do their pattern matching. But they seem to use their enormous "training" data sets to determine the likelihood of encountering a particular character sequence. These training-derived probabilities are then assumed to be able to "predict" what will follow any arbitrary sequence of input data. The LLM ("AI") is essentially allowed to create its own patterns.

If I am more or less correct about how they are implemented, LLMs are certainly not "intelligent" in any meaningful way. They are automatons: machines designed to follow a predetermined sequence of operations or respond to encoded instructions (https://www.merriam-webster.com/dictionary/automaton). LLMs run a set of statistical tools supplied by external programmers against a finite set of training data chosen by those programmers (from the works of unwitting and unrecompensed human authors). These tools produce an abstract statistical model of the concrete training data. The LLM then selects the text from its training data that is most likely to "come next" with respect to any given input text, according to the statistical model derived from that training data--a circular, self-contained process.
The above approach presents a couple of significant dangers. First, its statistics-based "understanding" is limited to the content of the training that its programmers have had the foresight to provide. Second, its statistically based equivalents to our matching patterns are unconstrained (see 1 and 2 above). The LLM generates its own rules based on the frequency with which patterns appear in its limited training data, not on what the patterns mean. In other words, the "AI" concerns itself simply with what matches, not with what we must match or must not match.
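If my description of LLMs above is roughly right, the circularity can be shown with a toy bigram model: every "prediction" is just a count taken from the training text, and anything outside that text simply does not exist for the model. (This is a deliberate caricature for illustration; real LLMs use vastly larger contexts and learned weights, but the dependence on training data is the same in kind.)

```python
from collections import defaultdict

# Training corpus: the model's entire "world".
training = "the cat sat on the mat the cat ate the rat".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(training, training[1:]):
    counts[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the most likely next word according to the training counts."""
    followers = counts[word]
    if not followers:
        return "<unknown>"   # the model has no rule for unseen contexts
    return max(followers, key=followers.get)

print(predict("the"))   # "cat": it followed "the" twice, vs once for "mat"/"rat"
print(predict("dog"))   # "<unknown>": "dog" never appeared in training
```

Note that the model never decides what it *should* match; it only reports what its limited corpus happened to contain, which is the unconstrained behavior described above.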

I suspect that other much-touted "AI" applications work in a fundamentally similar way. Consider self-driving cars. A robocab application that reliably stops for cyclists riding through intersections in daylight drives through and over a cyclist who crosses mid-block at night. An automobile autopilot application that reliably changes lanes on freeways and brakes for stopped vehicles drives through a jackknifed, empty flatbed trailer and decapitates its human passenger. Why? All I can think is that the models derived rules and probabilities from training data that did not include cyclists in the wrong place at night or trailers without loads or boxes on top. The programmers failed to consider these cases or simply used real-world data sets that did not happen to include them. The "AI" followed its training-derived rules. Oops.

The behavior of the human drivers in the two well-known examples I have cited illustrates the real danger of our current AI craze: users and engineers alike tend to believe the CEO drivel and the marketing fairy tales. The robotaxi has a human check driver to prevent accidents. But the car is intelligent and safer than a real driver. Riding with it is so predictably safe that she gets bored. She reads a book. BAM. The autopiloting passenger believes his car is intelligent because he paid a lot for it and identifies with the bad-boy plutocrat who owns the company. He loves the latter's over-the-top claims, ignores the fine print in the manual, and hops into the back seat for a nap. BAM.

The above is bad enough when we are talking about cars. Add wings, weapons, nukes, and national security responsibilities, and I think there will be problems.
 

Dassault Aviation invests in Harmattan AI at €1.4 billion value​


Pentagon releases new 'AI-first' military strategy​


Space Force taps Slingshot to build AI adversaries for orbital wargames​

 

Taiwan’s Tron Future unveils AI-guided anti-armor rockets​


Future of military AI in Saudi Arabia: AI-enhanced, or AI-native?​

 
