
USAF/US NAVY 6th Generation Fighter Programs - F/A-XX, F-X, NGAD, PCA

Hood

CLEARANCE: Top Secret
Senior Member
Joined
Sep 6, 2006
Messages
1,678
Reaction score
858
You have to wonder how much the analysts and staff officers of today actually understand about historical programmes.
The Century series were built en masse; only two types totaled under 800 airframes: around 300 F-104s for the USAF and 342 F-106s. So they could back up multiple types with meaningful numbers, spreading production and service costs.
Of course, four of the Century series were cancelled outright (F-103, F-107, F-108, F-109), and the F-106 was an extension of the F-102, so really the Century series were a mixed bunch at best. They were designed for a range of roles and proved remarkably adaptable to other roles, quite fortuitously for some of them. They also covered the output of five companies (six if you count Bell with the F-109), so there was research and development in depth.

All this sounds much like open-architecture, which I thought had been around for the last 20-30 years? Aircraft like the F-15, F-16 and F/A-18 have already accepted everything thrown at them in terms of new engines, avionics and materials. They are nothing like their A-models. I thought modern aircraft were always designed this way, to accept new subsystems with relatively little reworking? What this actually sounds more like is building monopolies so that certain companies will always supply certain items of equipment for the entire inventory.

The big worry, of course, is that the modern consumer industry loves 'built-in obsolescence', and talk of a Digital Century Series built on small increments, with scrapping rather than upgrading, could well lead to this if they are not careful.
 

sferrin

CLEARANCE: Above Top Secret
Senior Member
Joined
Jun 3, 2011
Messages
13,040
Reaction score
1,079
You have to wonder how much the analysts and staff officers of today actually understand about historical programmes.
The Century series were built en masse; only two types totaled under 800 airframes: around 300 F-104s
More like 2,578, the most numerous of the Century series. ;)
 

sublight is back

CLEARANCE: Top Secret
Senior Member
Joined
Aug 25, 2012
Messages
751
Reaction score
31
Say what you will about the Tomcat-versus-Eagle debate, I think the know-how exists today to build a somewhat common aircraft for both services.
Yes, but unfortunately there are the behind-the-scenes politics, the inter-service rivalries, and the personal vendettas, which are the ultimate silent weapon and can destroy any good program or platform in their path. The only way around this for the F-35 was to have suppliers in almost every congressional district of the USA.
 

mkellytx

CLEARANCE: Confidential
Joined
Sep 18, 2009
Messages
62
Reaction score
22
More like 2,578, the most numerous of the Century series. ;)
Phantom production ran from 1958 to 1981, with a total of 5,195 built, making it the most produced American supersonic military aircraft.
Wiki (because the F-4 was the F-110 back in the day)
The F-4 wasn't part of the Century series anymore than the F-111 or F-117 were.
Airframe        First Flight    Service Entry
F-100           1953            1954
F-101           1954            1957
F-102           1953            1956
F-104           1954            1958
F-105B          1956            1958
F-107           1956            N/A
F4H-1/F-110     1958            1960

One could argue... it was only two years separated from the Thud and Zipper, and contemporaneous with the D-model Thud.
 

sferrin

CLEARANCE: Above Top Secret
Senior Member
Joined
Jun 3, 2011
Messages
13,040
Reaction score
1,079
More like 2,578, the most numerous of the Century series. ;)
Phantom production ran from 1958 to 1981, with a total of 5,195 built, making it the most produced American supersonic military aircraft.
Wiki (because the F-4 was the F-110 back in the day)
The F-4 wasn't part of the Century series anymore than the F-111 or F-117 were.
Airframe        First Flight    Service Entry
F-100           1953            1954
F-101           1954            1957
F-102           1953            1956
F-104           1954            1958
F-105B          1956            1958
F-107           1956            N/A
F4H-1/F-110     1958            1960

One could argue... it was only two years separated from the Thud and Zipper, and contemporaneous with the D-model Thud.
Uh, it was a NAVY plane. The Century Series were USAF. The only thing "Century Series" about the F-4 was the painted "F-110" slapped on the side for a bit. May as well call the MiG-21 a Century Series aircraft too, because the USAF designated it YF-110 as well.

 

Lc89

CLEARANCE: Confidential
Joined
Aug 10, 2019
Messages
109
Reaction score
35
Couldn't they use parts cannibalization for the Digital Century Series? If there is commonality between platforms, it could save money.
 

bobbymike

CLEARANCE: Above Top Secret
Joined
Apr 21, 2009
Messages
9,828
Reaction score
900
———————————————————
But HASC intends to fence off 85 percent of the fiscal 2021 funding requested for the NGAD until the committee receives an independent review performed by the Pentagon’s director of cost assessment and program evaluation, according to the Tactical Air and Land Forces Subcommittee’s markup of the FY21 defense policy bill.
 

dark sidius

CLEARANCE: Secret
Joined
Aug 1, 2008
Messages
392
Reaction score
8
———————————————————
But HASC intends to fence off 85 percent of the fiscal 2021 funding requested for the NGAD until the committee receives an independent review performed by the Pentagon’s director of cost assessment and program evaluation, according to the Tactical Air and Land Forces Subcommittee’s markup of the FY21 defense policy bill.
Is this a normal process, or a bad sign for the future of the program?
 

latenlazy

I really should change my personal text
Joined
Jul 4, 2011
Messages
213
Reaction score
4
@BDF: I think you forgot to factor in the effect of having shared sub-systems and software across different airframes. That would translate into substantial cross-fleet economies. IMHO you should add a K factor to the above equation and use fleet numbers:

With N_i the number of airframes in the type-i fleet: TOC = $160M * K * sqrt(N1 + N2 + ... + Nn),
with K a function of the level of cross-integration of sub-systems (K > 1, and K → 1 as integration approaches the optimum). K is an index of quality.
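A minimal sketch of this proposed cost model (the $160M unit cost and the K values are the post's illustrative assumptions, not program data):

```python
import math

UNIT_COST_M = 160.0  # assumed unit cost in $M, from the post above

def total_ownership_cost(fleet_sizes, k):
    """TOC = $160M * K * sqrt(N1 + N2 + ... + Nn).

    k > 1 penalizes imperfect cross-integration of sub-systems;
    k approaches 1 as integration approaches the optimum.
    """
    if k < 1:
        raise ValueError("K >= 1 by the model's definition")
    return UNIT_COST_M * k * math.sqrt(sum(fleet_sizes))

# Two fleets of 200 airframes, moderately integrated (K = 1.2):
print(total_ownership_cost([200, 200], k=1.2))  # 3840.0 ($M)
```

Note how the square root is what carries the claimed cross-fleet economy: doubling the combined fleet raises TOC by only about 41%.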
This all sounds good in theory until you realize that the standardized frameworks and interfaces you need for a modularizable solution can end up imposing their own performance constraints, especially on hardware with a lot of integrated dependencies. There are serious problems in trying to adapt software development models to hardware domains. Software benefits from open-ended, adaptive development paths for meeting performance parameters (both as a function of its level of abstraction and in no small part thanks to the excess of computational power now available) and from fast iterative workflows (updating code is a much less burdensome task than tinkering with physical hardware). In hardware domains, by contrast, the constraints are physically based, and hitting performance targets requires specific, closed-loop integration dependencies.

Oftentimes, when these models from software engineering are applied too zealously to hardware, you end up compounding the cost problem: you find yourself spending time not so much updating components as trying to update the original frameworks to enable greater capability for future technology that works differently from the protocols the framework was originally configured for. Modular iterative approaches are inherently vulnerable to technical debt that accumulates as the framework ages, and that debt tends to be far more costly when what you're fitting together are complex physical objects rather than abstract logic represented by lines of code. This doesn't mean these kinds of iterative, open-ended program concepts can't work, but there are a lot of ways they can go horribly wrong, amplifying the very time, performance, and resource problems their adoption was meant to solve.
 

jsport

I really should change my personal text
Joined
Jul 27, 2011
Messages
1,731
Reaction score
121
@BDF: I think you forgot to factor in the effect of having shared sub-systems and software across different airframes. That would translate into substantial cross-fleet economies. IMHO you should add a K factor to the above equation and use fleet numbers:

With N_i the number of airframes in the type-i fleet: TOC = $160M * K * sqrt(N1 + N2 + ... + Nn),
with K a function of the level of cross-integration of sub-systems (K > 1, and K → 1 as integration approaches the optimum). K is an index of quality.
This all sounds good in theory until you realize that the standardized frameworks and interfaces you need for a modularizable solution can end up imposing their own performance constraints, especially on hardware with a lot of integrated dependencies. There are serious problems in trying to adapt software development models to hardware domains. Software benefits from open-ended, adaptive development paths for meeting performance parameters (both as a function of its level of abstraction and in no small part thanks to the excess of computational power now available) and from fast iterative workflows (updating code is a much less burdensome task than tinkering with physical hardware). In hardware domains, by contrast, the constraints are physically based, and hitting performance targets requires specific, closed-loop integration dependencies.

Oftentimes, when these models from software engineering are applied too zealously to hardware, you end up compounding the cost problem: you find yourself spending time not so much updating components as trying to update the original frameworks to enable greater capability for future technology that works differently from the protocols the framework was originally configured for. Modular iterative approaches are inherently vulnerable to technical debt that accumulates as the framework ages, and that debt tends to be far more costly when what you're fitting together are complex physical objects rather than abstract logic represented by lines of code. This doesn't mean these kinds of iterative, open-ended program concepts can't work, but there are a lot of ways they can go horribly wrong, amplifying the very time, performance, and resource problems their adoption was meant to solve.
After reading this, it once again becomes a big issue... why keep these craft manned, adding continual complication and cost?
 

latenlazy

I really should change my personal text
Joined
Jul 4, 2011
Messages
213
Reaction score
4
@BDF: I think you forgot to factor in the effect of having shared sub-systems and software across different airframes. That would translate into substantial cross-fleet economies. IMHO you should add a K factor to the above equation and use fleet numbers:

With N_i the number of airframes in the type-i fleet: TOC = $160M * K * sqrt(N1 + N2 + ... + Nn),
with K a function of the level of cross-integration of sub-systems (K > 1, and K → 1 as integration approaches the optimum). K is an index of quality.
This all sounds good in theory until you realize that the standardized frameworks and interfaces you need for a modularizable solution can end up imposing their own performance constraints, especially on hardware with a lot of integrated dependencies. There are serious problems in trying to adapt software development models to hardware domains. Software benefits from open-ended, adaptive development paths for meeting performance parameters (both as a function of its level of abstraction and in no small part thanks to the excess of computational power now available) and from fast iterative workflows (updating code is a much less burdensome task than tinkering with physical hardware). In hardware domains, by contrast, the constraints are physically based, and hitting performance targets requires specific, closed-loop integration dependencies.

Oftentimes, when these models from software engineering are applied too zealously to hardware, you end up compounding the cost problem: you find yourself spending time not so much updating components as trying to update the original frameworks to enable greater capability for future technology that works differently from the protocols the framework was originally configured for. Modular iterative approaches are inherently vulnerable to technical debt that accumulates as the framework ages, and that debt tends to be far more costly when what you're fitting together are complex physical objects rather than abstract logic represented by lines of code. This doesn't mean these kinds of iterative, open-ended program concepts can't work, but there are a lot of ways they can go horribly wrong, amplifying the very time, performance, and resource problems their adoption was meant to solve.
After reading this, it once again becomes a big issue... why keep these craft manned, adding continual complication and cost?
Part of the issue is that the component technologies themselves have become more complicated, and the more complicated your component technology, the more the complications of its integration dependencies multiply. Frankly, if a modular solution is what you want, I think it makes more sense to unpack one complex integrated platform into a set of distributed, delegated platforms than to embrace the contradiction of trying to make a complex integrated platform more modular in its internal construction and design. Alternatively, you can focus on simplifying your component technologies rather than modularizing your integrated platform. No amount of modularization will make your integrated platform more adaptable if the components being integrated only increase in complexity. A lot of what makes these kinds of products more program-efficient and effective really comes down to managing complexity rather than increasing iteration or modularity.
 
Last edited:

sferrin

CLEARANCE: Above Top Secret
Senior Member
Joined
Jun 3, 2011
Messages
13,040
Reaction score
1,079
This all sounds good in theory until you realize the standardized frameworks and interfaces you need to get your modularizable solution can end up imposing their own performance constraints, especially for hardware with a lot of integrated dependencies.
Such as? How would standardized interfaces between engine and airframe impose performance constraints? Between sensor and aircraft?
 

latenlazy

I really should change my personal text
Joined
Jul 4, 2011
Messages
213
Reaction score
4
This all sounds good in theory until you realize the standardized frameworks and interfaces you need to get your modularizable solution can end up imposing their own performance constraints, especially for hardware with a lot of integrated dependencies.
Such as? How would standardized interfaces between engine and airframe impose performance constraints? Between sensor and aircraft?
Depends on who's controlling development of the component systems. Let's say you standardize the engine diameter in order to avoid having to modify the bulkheads and the arrangement of interior compartments. Then new requirements in a future round of iterations make a wider-diameter engine the natural way to go. Now you have to choose between 1) accepting less capability, 2) reworking the airframe, or 3) finding a different development route to reach those requirements, all of which make your life harder rather than easier.

If you control the *whole* stack of technology and fully develop multiple potential pipeline paths for component systems, then you can probably enforce a standardized interface free of these compatibility and dependency risks. If you don't, you're not really saving yourself difficulty and cost, and you may even be making iteration more difficult and costly, because you're now engineering with an extra set of constraints. And if you are planning that far ahead, you might be locking yourself out of some emerging tech as well.

I'm not proposing that this is black and white, just pointing out that the viability and efficacy of this sort of batch-iterative process has a lot of extra caveats and conditions you need to get right in order to make it work, and there are a lot of ways it could go wrong. Organization and program design are still going to matter a whole lot, and if no mind is paid to the ways this kind of novel product development model can go sideways, things can get really ugly really fast. A case example of taking modularity for granted and playing the product development game too clever by half, without heeding the complex realities of inherent integration dependencies, is the 737 MAX MCAS debacle. Novel product development models always sound like gravy until you get into the thicket of details.
 

Mark S.

CLEARANCE: Confidential
Joined
Feb 5, 2011
Messages
121
Reaction score
25
Although not the greatest analogy, auto assembly plants are hugely complex systems utilizing automation, robotics, humans, and information systems to produce cars routinely 240 days a year. It takes interfacing of all the systems they contain to produce the flow through the plant and the outcome, a new vehicle, out the door. By the standards employed by the auto industry, the design of a new aircraft would be easy. Systems constantly change, and there is no magic control of that; it just takes good old-fashioned engineering. Over the years, design teams amass the "institutional knowledge" that reduces their chances of going down the wrong path. It may seem daunting to an individual who has not done sophisticated engineering, but for the many who have, it's another day at the office. The key isn't so much the system as the individuals who know how to put it all together, and the only way you get there is through experience.
 

jsport

I really should change my personal text
Joined
Jul 27, 2011
Messages
1,731
Reaction score
121
"Depends on who's controlling development": yes, since suspicion about which agendas are at play always matters more than the technology. B-17s were mass-produced quickly during WWII, and one would assume assembly could likewise be sped up, even in the modern, complex context, if there were the intent.

Engine diameter is another argument for an optimum baseline craft, for instance. The most efficient intake size would seem to be an easy calculation for a multi-mission baseline for the next 20 years... A "peer-reviewed Plato's perfect form" baseline aircraft, in terms of capability/size, lift-to-weight, endurance/speed, and multi-mission fit, could easily be calculated. Internal volume for stealth and the gun option(s) would seem to be key as well.
 
Last edited:

latenlazy

I really should change my personal text
Joined
Jul 4, 2011
Messages
213
Reaction score
4
Although not the greatest analogy, auto assembly plants are hugely complex systems utilizing automation, robotics, humans, and information systems to produce cars routinely 240 days a year. It takes interfacing of all the systems they contain to produce the flow through the plant and the outcome, a new vehicle, out the door. By the standards employed by the auto industry, the design of a new aircraft would be easy. Systems constantly change, and there is no magic control of that; it just takes good old-fashioned engineering. Over the years, design teams amass the "institutional knowledge" that reduces their chances of going down the wrong path. It may seem daunting to an individual who has not done sophisticated engineering, but for the many who have, it's another day at the office. The key isn't so much the system as the individuals who know how to put it all together, and the only way you get there is through experience.
Modern military aircraft are far more complex pieces of technology than cars. Cars don't even need deep software integration to operate all their basic functions. Furthermore, your "institutional knowledge" is only as good as the last iteration of your design.
 

sferrin

CLEARANCE: Above Top Secret
Senior Member
Joined
Jun 3, 2011
Messages
13,040
Reaction score
1,079
This all sounds good in theory until you realize the standardized frameworks and interfaces you need to get your modularizable solution can end up imposing their own performance constraints, especially for hardware with a lot of integrated dependencies.
Such as? How would standardized interfaces between engine and airframe impose performance constraints? Between sensor and aircraft?
Depends on who's controlling development of the component systems. Let's say you standardize the engine diameter in order to avoid having to modify the bulkheads and the arrangement of interior compartments. Then new requirements in a future round of iterations make a wider-diameter engine the natural way to go. Now you have to choose between 1) accepting less capability, 2) reworking the airframe, or 3) finding a different development route to reach those requirements, all of which make your life harder rather than easier.
Not necessarily. Standardizing interfaces would, almost by definition, include a degree of future-proofing. You're not going to suddenly decide to put a six-foot-diameter engine in a fighter, so that would be almost trivial.
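As a toy illustration of that kind of future-proofing (all figures are hypothetical, not from any real engine/airframe standard), an interface spec can simply reserve growth margin up front:

```python
from dataclasses import dataclass

@dataclass
class EngineBayInterface:
    max_diameter_m: float  # hypothetical structural limit of the bay
    growth_margin: float   # fraction of diameter reserved for future engines

    def accepts(self, engine_diameter_m: float) -> bool:
        # Current engines must fit inside the limit minus the reserved
        # margin, leaving headroom for later, wider iterations.
        usable = self.max_diameter_m * (1.0 - self.growth_margin)
        return engine_diameter_m <= usable

bay = EngineBayInterface(max_diameter_m=1.2, growth_margin=0.10)
print(bay.accepts(1.0))   # True: today's engine fits with margin to spare
print(bay.accepts(1.15))  # False: eats into the reserved growth margin
```

The design choice, per the argument above, is that the margin is part of the interface contract itself, so a wider future engine does not force a bulkhead redesign until it exceeds the headroom deliberately set aside.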
 

jsport

I really should change my personal text
Joined
Jul 27, 2011
Messages
1,731
Reaction score
121
"Lethality" and "business case," says Roper, but whatever happened to 'Full Spectrum Effects'? If these things are going to be this expensive and remain manned, they need to do a little bit of just about everything in High Intensity Conflict.
 

TomcatViP

Hellcat
Joined
Feb 12, 2017
Messages
1,621
Reaction score
422
DARPA ACE AI soars as an Albatros:
The overall focus of ACE is to develop and measure human trust in artificial intelligence (AI). The technologies developed within the ACE program will ultimately enable future pilots to confidently offload some high workload tactical tasks like visual air-to-air engagements so they can better focus on managing the larger battlespace.

Under this contract Calspan Flight Research will modify up to four Aero Vodochody L-39 Albatros jet trainers with Calspan’s proprietary autonomous fly-by-wire flight control system technology to allow implementation and demonstration of advanced Human Machine Interfaces (HMI) and AI algorithms. Flight tests and demonstrations will be conducted from the Calspan Flight Research Facility at the Niagara Falls, NY, International Airport and flown in the Misty Military Operating Area (MOA) over nearby Lake Ontario.

“Calspan is proud of our selection by DARPA to build an airborne air combat experimentation lab for the ACE program,” said Peter Sauer, Calspan President. Louis Knotts, Calspan Owner and CEO added “Since 1947, Calspan has been the world’s premier innovator, developer, and operator of in-flight simulators and UAV surrogates. This program presents an outstanding opportunity for Calspan to partner with DARPA for the use of our programmable flight control technology and provide them with a safe and flexible means to flight test these advanced algorithms.”
 

Grey Havoc

The path not taken.
Senior Member
Joined
Oct 9, 2009
Messages
11,213
Reaction score
1,488
"Lethality" and "business case," says Roper, but whatever happened to 'Full Spectrum Effects'? If these things are going to be this expensive and remain manned, they need to do a little bit of just about everything in High Intensity Conflict.
Corporate style thinking is indeed one of the reasons that the defenses of the United States are in such a mess.
 

iverson

CLEARANCE: Secret
Joined
Sep 24, 2009
Messages
281
Reaction score
82
Corporate style thinking is indeed one of the reasons that the defenses of the United States are in such a mess.
Imagine if defense contractors weren't publicly owned.
Privately owned companies.... What a threat? lol
I didn't see anything said about "ownership". Grey Havoc remarked on "corporate thinking", which, as any rational person who has worked within a large modern corporation knows, is not thinking at all, at least not in any good sense.

"Corporate thinking" incorporates all the shortcomings of collective group-think but replaces the latter's commitment to democracy and social solidarity with naked self-interest. NOT corporate interest--self-interest, the interests of executive management and of those determined to kiss up and kick down in the generally futile hope of replacing those above them. Corporate think sees business as a zero-sum game, where the all-important CEO and his cronies can only win to the extent that others lose. So it prioritizes corporate politics over economic reality. It seeks short-term gains that make internal stock options profitable over sustainable, year-over-year profits or investment in the company. Monopoly is more profitable than competition, so it makes "sense" to buy competitors instead of developing new and better products of one's own.

Defense corporations are probably worse than normal in this respect, because they generally have only one essential customer--a sovereign state in which they are incorporated and that has to use their products. So requirements can, in most cases, be ignored. The corporation just needs to cozy up to the right government officials and promise cushy, high-paying, post-retirement jobs/consultancies to selected generals, admirals, bureaucrats, and/or politicians. Sometimes, corporate think simplifies things even further and just pays bribes (Cunningham, the Hunters, and many other examples).

Corporate think can accept all of the above while trying to make its non-executive-track employees believe the exact opposite. Mere workers get business ethics classes that say that the company's high ethics forbid exactly what it does--at least in the case of mere workers. They enjoy the privilege of attending workshops, reading email from the CEO, and seeing posters in the cafeteria, all proclaiming the new quality management methodology. These all rest on the assertion that quality and innovation do not depend on investment in R&D, tooling, or even quality inspectors. Quality is not a characteristic of the things the company makes. Instead, quality is something that happens when we embrace the methodology and learn and use its terminology (and acronyms): Total Quality Management (TQM), Six Sigma (the one that corporatist demigod Jack Welch used to make his fortune while driving the reliably profitable GE to the brink of bankruptcy), SMART goals, Agile, and many more that I've thankfully forgotten. If quality problems emerge thereafter, someone didn't use the right wording (or possibly colors) in their PowerPoint slide.

No, at this point, private ownership has nothing to do with the state of the defense industry or of the economy in general. Remember the USSR? The much-derided centrally planned economy? The Five-Year Plans? It was a corporation managed by boards (soviets) and a chairman who supposedly ran the economy for the benefit of the owners (the people) but instead made a good thing out of it for themselves by issuing Plans instead of making stuff the owners needed. Who does or does not own resources is immaterial. What counts is control. In the Soviet state that was, in the neo-Soviet/mafia state that replaced it, in the US, the UK, or most of Europe, control has long since fallen into the hands of unelected bureaucrats. If you happen to own shares in a major corporation, ask yourself: how much control does that give you? Whether you vote your shares at the annual meeting, don't vote, or don't even own shares, your effect on current corporate governance is the same: nil, just as when Russians voted for the Chairman and Central Committee of the Soviet-era Communist Party. In both systems, you get the slate of officers and members recommended by the current officers and members. The fact that said officers and members no longer draw government salaries the way they did under Stalin does not seem significant to me.

If more people actually read Adam Smith and David Ricardo, these myths about ownership would go away.
 