Anduril Industries’ YQ-44A Fury and General Atomics’ YFQ-42A are both “basically ready to go,” a Pentagon official said.

I had always seen the Fury designated as YFQ-44A, not YQ-44A. Is this just a typo or misspeak, or does the Fury no longer merit the F designation?
 
WASHINGTON — The US Navy has awarded contracts to four major aerospace prime contractors — Anduril, Northrop Grumman, Boeing and General Atomics — for “conceptual designs” for a carrier-based autonomous combat drone, according to a Navy document obtained by Breaking Defense.

Additionally, Lockheed Martin is under contract for the drone’s “common control,” according to a slide on Collaborative Combat Aircraft from the Navy’s program executive office for unmanned aviation and strike weapons, dated Aug. 20.
 
The Navy may also be chasing a significantly cheaper price point than the Air Force: about $15 million per plane, compared to the $25 million to $30 million per aircraft cited by former Air Force Secretary Frank Kendall. In April 2024, Rear Adm. Stephen Tedford, the Navy’s program executive for unmanned systems and weapons, said that cheaper price point would enable the service to use a single CCA multiple times for surveillance and strike missions before ending its lifespan as a one-way attack drone.
“I want something that’s going to fly for a couple hundred hours. The last hour it’s either a target or a weapon,” he said then. “But I’m not going to sustain them for 30 years.”
 

… She said initial testing is coming “later this year, and then … a second series of expected testing [will] run in early 2026 to validate some key designs, and this will be an engine family that will be available both domestically and internationally.”

Albertelli withheld most details about the engine but said it is “not the TJ150,” which Pratt builds for the Miniature Air-Launched Decoy and will be used by Leidos for a small cruise missile program. “It is something else that we have been working on for some time and are very excited about.”


Pratt is “seeing strong demand from both the services and international customers for CCAs, and we really have been able to provide solutions that we’re looking to field faster, again, moving at the speed of relevance,” she said.
 
Beehive is focusing initially on test and development of the 200 lb.-thrust Frenzy engine variant. “The 200 lb.[-thrust] engine will go into high altitude testing next, and then we’ll begin testing a 100 lb.-thrust version early next year. Both are targeting a 2026 production start,” a company official tells Aviation Week. Initial flight tests are scheduled for early 2026.


 

Air Force advances human-machine teaming with autonomous collaborative platforms


EGLIN AIR FORCE BASE, Fla. – The U.S. Air Force recently demonstrated a major leap in human-machine teaming, flying autonomous collaborative platforms (ACPs) alongside crewed fighter aircraft during a training event at Eglin Air Force Base, Florida. Pilots operating an F-16C Fighting Falcon and an F-15E Strike Eagle each controlled two XQ-58A Valkyrie aircraft in an air combat training scenario, showcasing real-time integration between manned and semi-autonomous systems.


https://x.com/SMART_DoD/status/1970503653690655010
 

EAST HARTFORD, Conn., Sept. 24, 2025 /PRNewswire/ -- Pratt & Whitney, an RTX (NYSE: RTX) business, has completed critical testing on its small turbofan engine family for use on Collaborative Combat Aircraft, or CCAs. Testing confirms that the business can increase thrust on these existing engines for use on CCA platforms.

The engine family, originally designed for commercial aircraft applications, is known for its exceptional performance, reliability and efficiency. Building on these qualities, Pratt & Whitney has unlocked additional capability from the engine to benefit CCA applications, which favor embedded engines that offer maximum maneuverability and range.

"For unmanned applications, our commercial-off-the-shelf engines can offer an up to 20% increase in their qualified thrust capability," said Jill Albertelli, president of Pratt & Whitney's Military Engines business. "This means that we can deliver increased performance from these production engines. Ultimately, this will allow for reduced cost and weight for multiple applications."

A second series of tests is underway, monitoring inlet airflow and pressure variations for engines embedded within the aircraft. When airflow to the engine is interrupted or blocked, there is the potential to impact performance. These tests are pushing those limits, intentionally distorting airflow around the flight envelope to document performance and produce a reliable prediction tool for future installations.

This series of tests, conducted alongside a digital twin model, allows Pratt & Whitney to meet cost, schedule and technical requirements for CCA propulsion while reducing risk in engine integration activities.
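To make the “reliable prediction tool” idea concrete, here’s a minimal sketch of one way such a tool could work: collect thrust-loss measurements at increasing levels of inlet distortion, fit a simple curve, and interpolate inside the tested envelope. The data points and the quadratic model are illustrative assumptions, not Pratt & Whitney’s actual figures or method.

```python
# Minimal sketch of a distortion-tolerance prediction tool. The test points
# (inlet distortion index vs. fractional thrust loss) and the quadratic fit
# are assumptions for illustration, not Pratt & Whitney data.
import numpy as np

distortion_index = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])
thrust_loss = np.array([0.000, 0.004, 0.015, 0.034, 0.061, 0.096])

# Fit a quadratic: small distortions cost little, larger ones compound
coeffs = np.polyfit(distortion_index, thrust_loss, deg=2)
predict_loss = np.poly1d(coeffs)

# Predict performance at an untested condition inside the envelope
di = 0.18
print(f"Predicted thrust loss at DI={di}: {predict_loss(di):.1%}")
```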
 
Relying on software-based AIs for mission-critical applications is extremely risky at the best of times.

Nearly every modern system is software-based. Reliability is determined by testing, and by fixing the errors testing detects. Non-AI systems are certainly capable of huge data errors due to unforeseen events. I was once told a story about an Aegis system displaying a supersonic track racing toward its parent ship; it turned out to be a reflection off the moon with such an extreme and unforeseen time delay or Doppler shift that the speed calculations went out the window. Perhaps a tall tale, but the experience of the USS Vincennes is certainly real.
 
Nearly every modern system is software-based. Reliability is determined by testing, and by fixing the errors testing detects.
There’s an issue around software testing when it comes to AI: it simply doesn’t have the definitive predictability of classic code. I can set up a set of inputs with classic code and know exactly what the results should be. I can’t do that with AI, because it decides the outcome based on a model of the problem we simply don’t fully understand. And then there are hallucinations.
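To illustrate the point, here’s a minimal sketch of the difference, using a toy stand-in for a learned model. With classic code you assert an exact result; with a model you can only assert a statistical acceptance threshold over a sample, and a passing run still says nothing about any specific unseen input.

```python
# Sketch of the testing gap described above. The "model" is a toy stand-in
# that is right ~95% of the time, not any real fielded system.
import random

random.seed(0)

def classic_add(a, b):
    return a + b

# Deterministic code: one input, one exact expected output
assert classic_add(2, 3) == 5

def model_predict(x):
    # Stand-in for a learned classifier: usually right, sometimes not
    return (x > 0.5) if random.random() > 0.05 else (x <= 0.5)

# Statistical testing: we can only check accuracy over many samples
samples = [random.random() for _ in range(10_000)]
accuracy = sum(model_predict(x) == (x > 0.5) for x in samples) / len(samples)
assert accuracy > 0.93, f"accuracy {accuracy:.3f} below threshold"
print(f"measured accuracy: {accuracy:.3f} - but no guarantee on any given input")
```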
 

For CCA Increment II, the Air Force will award concept refinement contracts potentially to “several” companies imminently. The award is expected by the end of the calendar year or early in fiscal 2026, says Col. Timothy Helfrich, the senior materiel leader for advanced aircraft and director of the service’s Agile Development Office. More than 20 vendors were solicited to compete for the effort, and the exact number of awards will be “dependent on several factors.”

While it has long been expected that the second increment would aim for lower-priced, less-capable systems that could be produced en masse, that decision is not set. Helfrich says his office has been working with the service’s requirements community “and finding out what are the real use cases that will be helpful in filling any potential operational gaps in the future.”

The service has narrowed down a few use cases and initial attributes of what the system could look like.

“There’s potential that it could be high-end capability. There’s potential that it could be low-end capability,” he says. “It really is going to come down to what drives the best cost per effect. It’s less about an individual system, but how they will affect the overall system.”

 
There’s an issue around software testing when it comes to AI: it simply doesn’t have the definitive predictability of classic code. I can set up a set of inputs with classic code and know exactly what the results should be. I can’t do that with AI, because it decides the outcome based on a model of the problem we simply don’t fully understand. And then there are hallucinations.
And almost immediately the USAF reports that AI can write air strike orders 400 times faster than humans, but some of them are wrong - and they’re not glaring errors but subtle ones that need careful reading to spot.
 
I am curious whether these errors involved suboptimal engagements, sources of fire that were unable to hit the target, or something truly catastrophic like engaging friendlies or large numbers of civilians. Those are different levels of failure, and doing something OK ten minutes before doing something perfect probably has a huge amount of value.
 
https://www.youtube.com/watch?v=dF_BIVqaudE&t=2s


Not a super fan of this channel, but I've seen him make a point that I haven't really read before - apologies if this is old news to you.
Apparently the Air Force is looking for a low-end design: it wants to control costs and technology aggressively and spend as little as possible per drone to hit baseline capability (which does not include stealth). It even chewed out one of the contractors for a drone that was too expensive and too stealthy.

It also wants to own the design and make it possible to contract out manufacturing to civilian companies, building as broad an industrial base as possible and preserving the option of fielding a large fleet, which it thinks will be more useful than a handful of highly capable drones.

It’s an interesting idea, and I think the US might even have an edge on China in manufacturing capability at this level (say, Cessna 172 level).
 
I am curious whether these errors involved suboptimal engagements, sources of fire that were unable to hit the target, or something truly catastrophic like engaging friendlies or large numbers of civilians. Those are different levels of failure, and doing something OK ten minutes before doing something perfect probably has a huge amount of value.

"While he didn’t go into details, he said the errors were not blatant but subtle: more along the lines of failing to factor in the right kind of sensor for specific weather conditions, rather than trying to send tanks on air missions or put glue on pizza. (Of course subtle errors are harder to catch and require more expertise for a human to correct.)"
 

"While he didn’t go into details, he said the errors were not blatant but subtle: more along the lines of failing to factor in the right kind of sensor for specific weather conditions, rather than trying to send tanks on air missions or put glue on pizza. (Of course subtle errors are harder to catch and require more expertise for a human to correct.)"

That seems like a modest failure that might be perfectly acceptable given the general advantages of speed of engagement. To put it another way: some engagements were suboptimal or unsuccessful, but all engagements took place much faster, such that the total effort was likely still far more effective than slower, meticulous human management. Though presumably they will continue to improve the AI parameters to make it more cognizant of those constraints.
 
That seems like a modest failure that might be perfectly acceptable given the general advantages of speed of engagement.
If it's sending a laser designation platform armed with LGBs, and there's complete cloud cover, or thick fog, then it's a definite problem.
To put it another way: some engagements were suboptimal or unsuccessful, but all engagements took place much faster, such that the total effort was likely still far more effective than slower, meticulous human management.
Being able to generate a strike plan in 0.125 seconds (vs several minutes) doesn't mean you can perform the strike itself any faster.

Though presumably they will continue to improve the AI parameters to make it more cognizant of those constraints.
It’s probably a learning issue at this point in time, but we learn so much through unconscious learning from those around us that quantifying that learning so it all comes from explicit lessons instead is difficult. We can undoubtedly patch over the obvious omissions quickly, but the subtle ones are going to lurk unseen until they emerge to bite you.
 
It also wants to own the design and make it possible to contract out manufacturing to civilian companies, building as broad an industrial base as possible and preserving the option of fielding a large fleet, which it thinks will be more useful than a handful of highly capable drones.
That’s all well and good, but why would you take Increment 1 (or 2 or 3, for that matter) and have multiple commercial OEMs produce it? Lockheed and its suppliers by themselves produce north of 150 F-35s a year, and were originally planning to produce 200 or so each year. CCA Increment 1 appears to be a 1,000-aircraft program, as stated by the then US SecAF. A few hundred copies from each of the two OEMs should do the trick and wouldn’t really need a very dramatic call to industry to meet the need. The only reason you would need to rope in the Fords and GMs of the world on top of every other aerospace company out there would be if you needed tens of thousands of these things. But then, that would probably be for a smaller Speed Racer-type system in the single-digit millions (missionized).
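As a back-of-envelope check on that claim (the 1,000-aircraft figure and the F-35 comparison are from the post above; the five-year delivery window is my assumption):

```python
# Rough production-rate check. Program size and the F-35 comparison come
# from the post above; the five-year delivery window is an assumption.
program_size = 1_000
num_oems = 2
years = 5

rate_per_oem = program_size / num_oems / years
print(f"required rate: {rate_per_oem:.0f} aircraft/yr per OEM, "
      f"vs 150+ F-35s/yr from a single prime")
```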
 
If it's sending a laser designation platform armed with LGBs, and there's complete cloud cover, or thick fog, then it's a definite problem.

Being able to generate a strike plan in 0.125 seconds (vs several minutes) doesn't mean you can perform the strike itself any faster.
It does.

Another article on the same story said it takes a human 12 minutes to generate a single Course of Action. At a minimum, the AI generates a set of possible courses of action in a few seconds, and the human just needs to make changes or choices. In the best case, the choice is made in a few seconds and the targeting loop is completed faster. In worse cases, it takes 5 minutes to make corrections and send it out. At the very worst, the full 12 minutes.
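Putting rough numbers on that (the 12-minute human baseline and the seconds-scale AI generation come from the articles cited in this thread; the review times are my assumptions, which is exactly what’s being argued about):

```python
# Rough targeting-loop timeline comparison. The 12-minute human baseline
# and ~10-second AI generation come from the articles cited in the thread;
# the review-time figures are assumptions for illustration.
human_from_scratch = 12 * 60            # seconds per COA, human only

ai_generate = 10                        # seconds to draft a COA
cases = {
    "best case (quick accept)": 30,     # assumed
    "worse case (corrections)": 5 * 60, # assumed
    "worst case (redo by hand)": human_from_scratch,
}

for label, review in cases.items():
    total = ai_generate + review
    saved = human_from_scratch - total
    print(f"{label}: {total/60:.1f} min vs {human_from_scratch/60:.0f} min "
          f"({saved:+d} s)")
```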
 
It does.

Another article on the same story said it takes a human 12 minutes to generate a single Course of Action. At a minimum, the AI generates a set of possible courses of action in a few seconds, and the human just needs to make changes or choices. In the best case, the choice is made in a few seconds and the targeting loop is completed faster. In worse cases, it takes 5 minutes to make corrections and send it out. At the very worst, the full 12 minutes.
You're forgetting that someone needs to sit and review it, so the advantage is going to be a lot less, and if it's going into a complete Air Tasking Order for tomorrow's missions, tomorrow won't get here any faster.

"The AI generated 1.25 COAs every second, the humans generated one COA every 5.3 minutes"

So the best saving is probably about 4 minutes, though I suspect that if you need to read and understand all the issues, it’s probably not going to be significantly shorter than the 5.3 minutes of just doing it yourself.
 
You're forgetting that someone needs to sit and review it, so the advantage is going to be a lot less

So the best saving is probably about 4 minutes, though I suspect that if you need to read and understand all the issues, it’s probably not going to be significantly shorter than the 5.3 minutes of just doing it yourself.
Does it take the same amount of time to look at a handheld map and figure out where you are in real time as to look at Google Maps and verify that where you think you are is correct? Does it take the same amount of time to write your own code as to read the code and correct a few lines in what someone else gave you? You aren’t even doing any data inputting here; that data is getting fed into the system in real time. I’m not sure how reviewing something takes just as long as doing it from scratch in the average case.

It doesn’t matter that AI screws up sometimes; you always check AI, whether it’s right or wrong. The saved time here isn’t just a few minutes of trivial time. It adds up across hundreds of cases, and it reduces operator exertion during that time - see the rough numbers sketched below. So no - I don’t agree that the advantage is a lot less.
"The AI generated 1.25 COAs every second, the humans generated one COA every 5.3 minutes"

Excerpt:

An AI algorithm was able to generate a COA in 10 seconds, compared to 16 minutes for a human, but “they weren’t necessarily completely viable COAs,” said Maj. Gen. Robert Claude, Space Force representative to the ABMS Cross-Functional Team.
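The “it adds up” argument is easy to put numbers on. A minimal sketch, assuming 90% of AI drafts need only a quick review and 10% need heavy correction (all rates here are illustrative assumptions, not measured figures):

```python
# Cumulative-savings sketch behind "it adds up across hundreds of cases".
# The 90/10 split and the review times are assumptions for illustration.
n_cases = 300
human_only = 12.0                   # minutes per COA from scratch
quick_review, heavy_fix = 2.0, 7.0  # minutes, assumed
p_quick = 0.90

ai_assisted = p_quick * quick_review + (1 - p_quick) * heavy_fix
saved = n_cases * (human_only - ai_assisted)
print(f"expected per-case time: {ai_assisted:.1f} min vs {human_only:.0f} min")
print(f"saved over {n_cases} cases: {saved/60:.1f} hours")
```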
 
Does it take the same amount of time to write your own code as to read the code and correct a few lines in what someone else gave you?
Having done this for real on other people’s safety-critical code, if the assumptions aren’t laid out clearly it can take forever to figure out what the hell’s going on. One time I had to dig deeper into some code and found an entire undocumented channel of redundancy cancelling out the changes I’d made in the channel we knew about.

Remember, we're talking 5 minutes to understand the mission, then understand what platforms the AI has assigned and what weapons, and figure out if they can get the job done and get it done safely. It could seriously be quicker to generate your own solution if the AI's approach is non-obvious.
 
There’s an issue around software testing when it comes to AI: it simply doesn’t have the definitive predictability of classic code. I can set up a set of inputs with classic code and know exactly what the results should be. I can’t do that with AI, because it decides the outcome based on a model of the problem we simply don’t fully understand. And then there are hallucinations.
Due to the lack of definitive predictability, I’ve always wondered how AI code is used in safety-critical applications. As far as I know, this is the reason AI is not certifiable (I’m not a software guy).
 
I suppose you had to excise all of that code?
Nope, I had to figure out how it worked and implement the changes into it as well, because I didn’t know what else it affected. That took more time than had been allocated for the task, which resulted in a pissed-off manager blaming it on me. That seemed to happen a lot on his project, and not just to me.
 
You keep harping on the worst case, the edge case, the problem cases, but those aren’t the problems AI is trying to solve, are they? For AI to be useful, it just has to be right in the majority of cases, reducing the workload for the human operator and allowing the human to focus on the complex or wrong cases. If it can’t achieve that, it’s a failure as an AI.
Having done this for real on other people’s safety-critical code, if the assumptions aren’t laid out clearly it can take forever to figure out what the hell’s going on. One time I had to dig deeper into some code and found an entire undocumented channel of redundancy cancelling out the changes I’d made in the channel we knew about.
Yeah, I’ve had that happen to me too, but that’s one time (or 10 in 100 times). It’s not the regular case at all, is it? What we are talking about here isn’t the 10 in 100 times; it’s the 90 in 100 times.
Remember, we're talking 5 minutes to understand the mission, then understand what platforms the AI has assigned and what weapons, and figure out if they can get the job done and get it done safely.
If someone has been sitting at their desk monitoring things unfold, do they really have to start the entire process from scratch? Pretty sure in the general case you would already have a decent idea of what’s going on and what’s needed to make a decision quickly.
It could seriously be quicker to generate your own solution if the AI's approach is non-obvious.
Sure. But again: 10 in 100 times or 90 in 100 times? Over how long a period? At what human cost?
 
You keep harping on the worst case, the edge case, the problem cases,
Actually, I’ve been talking about the typical case, with the exception of the code anecdote (and of course I didn’t know it wasn’t a typical case when I started). If the typical time to work out what’s needed is five minutes, I don’t believe you’re going to be a lot faster looking at what the AI’s proposing and analysing target, defences, assigned aircraft and assigned weapon, together with any environmental factors to confirm they’re all reasonable.
 
Actually, I’ve been talking about the typical case.
No, you have not. You have been harping (and continue to harp) on the errors the AI made, which were already not the typical case.
If the typical time to work out what's needed is five minutes
We’ve been through this. It is not.

The typical case was stated to be 16 minutes in my article and 12 in yours. I proposed five minutes as the time it takes to evaluate the result the AI gave.
I don't believe you're going to be a lot faster looking at what the AI's proposing and analysing target, defences, assigned aircraft and assigned weapon, together with any environmental factors to confirm they're all reasonable.
You don’t believe it, but the math you propose doesn’t add up unless your idea of working with AI is to ignore it and do everything over again.

The coding example is you using a 10% case to try to disprove my 90% case. For the average pull request, the 90% case, does it take you longer to review the code or to write it yourself?
 
