Depends. Will the neural network fly its combat missions with a perfect record before the US hands over the entirety of its nuclear deterrent to it?

What could possibly go wrong with a killer-AI?
And that truly scares me, the thought that an AI computer is in charge of the US nuclear deterrent. Nope, I hope that never happens.
Nothing to worry about. Just have it play tic-tac-toe until it decides that nuclear war is futile and gives up.
(noughts and crosses to our British cousins)
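For what it's worth, the joke checks out: tic-tac-toe really is a forced draw under perfect play, and a few lines of minimax will verify it. A minimal sketch (not anything WOPR ran, obviously):

```python
from functools import lru_cache

# Board is a tuple of 9 cells: 'X', 'O', or ' '. X moves first.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of the position for X: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    values = [
        minimax(board[:i] + (player,) + board[i+1:], 'O' if player == 'X' else 'X')
        for i in moves
    ]
    return max(values) if player == 'X' else min(values)

# Perfect play from the empty board is a draw: "the only winning move is not to play".
print(minimax((' ',) * 9, 'X'))  # 0
```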
Been watching WarGames too many times, TomS?
Come on, what's the worst that can happen? Some technicians trying to pull the plug once the network becomes self-aware, and the network initiating a nuclear exchange?
And, hopefully, you realize that USAF tests are often notoriously biased towards the currently fashionable, big-budget option--always have been. When tests do not produce the intended result, moreover, all the services have a tendency to stop them, change the rules, and try again until they do.

You do realize there has been an actual AI vs human dogfight test by the USAF and that the AI won all five times, right? AIs now can beat every chess master. How they beat every chess master isn't particularly relevant. In an era of terabyte thumb drives, I'm confident every piece of aerial combat history can cheaply reside in any given drone.
But that probably isn't even where loyal wingman is going initially. It seems far more likely to me that they will act as stand-off sensor and EW platforms with a much less demanding role: holding formation forward of the manned aircraft and providing target info, cover jamming, and, if necessary, serving as decoys. They might also have a short-range A2A capability eventually, but I suspect initially their role will be more conservative. This is easily within the capability of current tech... an AI with a MADL will be given a behavior directive by the manned platform (recon/decoy/pit bull, etc.) and it will operate within those directives even if the link is cut. This isn't as challenging as being a standalone offensive platform with no human input; it's basically just a combat Roomba.
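The "hold the last directive even if the link is cut" behavior described above is simple enough to sketch as a toy state machine. Everything here (the class, the directive names, the receive/link-lost calls) is invented for illustration, not any real avionics interface:

```python
from enum import Enum

class Directive(Enum):
    RECON = "recon"
    DECOY = "decoy"
    PIT_BULL = "pit_bull"

class LoyalWingman:
    def __init__(self):
        self.directive = Directive.RECON  # conservative default behavior
        self.link_up = True

    def receive(self, directive: Directive):
        """Manned platform pushes a new behavior directive over the datalink."""
        if self.link_up:
            self.directive = directive

    def on_link_lost(self):
        """Link cut: keep operating within the last directive received,
        rather than going autonomous-offensive."""
        self.link_up = False

    def step(self) -> str:
        return f"executing {self.directive.value} (link {'up' if self.link_up else 'down'})"

wingman = LoyalWingman()
wingman.receive(Directive.DECOY)
wingman.on_link_lost()
print(wingman.step())  # executing decoy (link down)
```

The point of the sketch is only that "operate within directives even if the link is cut" is ordinary state-machine logic, not open-ended autonomy.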
It totally has happened; the loyal wingman concept is exactly what entrenched flyboys would pitch given threats from technology.
The "loyal wing man" concept itself may be just such a politically motivated attempt at institutional self-protection. Politicians and vendors trumpet the potential of remote- and software-controlled drones as cheaper, politically less sensitive replacements for manned aircraft. So the traditional air force flyboys coopt the technology and write a requirement that makes it a mere adjunct to the flesh-and-blood aviator.
If you read my remarks at all carefully, I can hardly be called a Luddite or a technophobe. I make my living from computer technology.

There is plenty of truth in the overselling of “AI” and the misleading presentation of greater autonomy as artificial thinking.
However, a lot of the other comments above appear to be little more than technophobia misrepresented as something more reasoned and reasonable.
Not all technological change is good. Sometimes technological change is rushed when it’s not entirely ready. Some (most?) technological changes will prove to have pros and cons that evolve over time (as does the technology).
But a Luddite position that all technological change is inherently and unavoidably bad is unconnected to history or reality.
Anything done poorly will almost certainly perform poorly.
Any UCAV implemented with a poor conception of what it is for and what it can actually do is clearly not going to do well.
But you can equally say the same thing about manned aircraft, which are (almost) equally built around, and entirely reliant on, much the same advanced technology.
And the argument that an unmanned “loyal wingman” is being sold as superior to a manned one is equally a straw-man argument.
It’s not being sold as superior in performance and flexibility versus its manned equivalent (it’s not) - it’s being sold as cheaper and more expendable: to help the manned platform survive and undertake its task rather than seeing more manned platforms shot down and pilots killed. It can be risked closer to threats than air forces will be willing to send their manned aircraft.
It may well be that this initial generation of loyal wingmen will be relatively limited in capability, will not live up to the current hype, and will be bought in relatively small numbers. However, as long as they are implemented and used within what they actually offer (and not incorrectly prioritised and/or deployed), they can lead to subsequent generations of increasingly capable unmanned combat aircraft. The associated technology is not getting un-invented any time soon.
Indeed. Ukraine's Turkish-made Bayraktars seem to be little more sophisticated than cutting-edge hobbyist equipment. They have a "ludicrously" small payload. Yet they have been perhaps the most successful combat drones in history, while operating in the face of the much-vaunted air defenses of the West's most sophisticated opponent. Actual quadcopter hobbyist drones have proved decisive for artillery spotting and scouting for tank-hunting teams. Some have even been used as ultralight bombers.

I don't think the military really needs to get surgical with AI-controlled assets. Simply overwhelming an enemy force with expendable drones is enough to do the job.
There is a non-silly side to this. If you are on the receiving end of an "AI"-mediated friendly-fire incident or find yourself colliding with a "loyal wingman", it might as well be nuclear from your point of view. Skynet presumed a malevolent intelligence. But what if the "AI" in question is not intelligent--only presumed to be--and is thus just a machine that can go on the fritz, like your office thermostat? Do you really want it to have responsibilities?
Nonsense.
I suggest we call it Skynet.
One question: what is "the profiling of all human aviators"? How is it done? What attributes, methods, and parameters do you include? How, for example, do you measure "skill" in order to differentiate it? What is "skill" in this context? What units, instruments, and protocols do you use when doing the measuring? Are we counting G-tolerance? Eyesight? Aerobatic ability? Ability to calculate fuel burn? Navigational skill? Tactics? Strategy? Knowledge of rules of engagement/military law/international law? Good judgment? And, if we are, how do we balance them against each other when arriving at a "profile"? Are the units and measurement methods appropriate to each common to all?

The potential for a hyper-fine-grained, completely centralized campaign also enables absurdly fine and long-time-frame considerations that can be brute-forced into being with stupid amounts of compute to solve extensive game-theory problems.
For example, I expect the profiling of all human aviators (if skill differential is notable) and systems that enable real time identification via non-cooperative means. There'd be tactical "interactions" to collect this info and other things, and considerations in defeating/neutralizing each "human constraint" would be part of the combat model.
No one has come up with a reasonable definition of "intelligence".
Russia isn't much of an opponent. Hasn't been for over three decades. I would not make decisions about the air power of the USA based on Russia. In under 30 years the USAF will have fielded three new fighters, while Russia still struggles with one new... I don't know what to call it... 4.5-gen aircraft.
The value of these cheap platforms has derived not from the technology itself, essential though that is, but from the imaginative way in which they have been used to gain leverage on the real-world, here-and-now battlefield. The Ukrainians have skillfully matched the limited capabilities and payloads offered by the technology to the available range of targets, taking into account potential countermeasures.
The Ukrainians understand this technology--what it is, what it can do and what it is not and cannot do.
Your 17-year-old cat is probably a bit of an IQ genius compared to many folks out there.
A Douglas-Grumman hybrid, eh?
That's why her name is Skycat.
SkyWulf?
I also have a dog, you know. You know what he's called?
The answer is: all of them. Every piece of data that can be collected will be thrown into a model, and big-data systems will be used to extract maximum value out of the information.
You don't need "intelligence"; you just need behavior that leads to fulfillment of objectives. Design a scoring function on a model of reality and it reduces to an optimization problem that you can use a world of tools to compute.

This is my core critique of "AI", as practised today. It pretends to be something that it cannot rigorously define. No one has come up with a reasonable definition of "intelligence". And without that, how do you know what you have implemented?
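The "scoring function plus optimization" point can be made concrete with a toy sketch. The model of reality, the weights, and the candidate actions below are all made up for illustration; a real system would swap in a far richer model and a smarter optimizer than brute force:

```python
from itertools import product

def score(speed, altitude, emissions):
    """Toy scoring function: reward closing speed and altitude, penalize
    detectable emissions. The weights are arbitrary, for illustration only."""
    return 2.0 * speed + 1.0 * altitude - 3.0 * emissions

# Discretized action space: exhaustive search stands in for fancier optimizers.
speeds = [0.0, 0.5, 1.0]
altitudes = [0.0, 0.5, 1.0]
emission_levels = [0.0, 0.5, 1.0]

# No "intelligence" anywhere: just pick the action that maximizes the score.
best = max(product(speeds, altitudes, emission_levels), key=lambda a: score(*a))
print(best)  # (1.0, 1.0, 0.0): fast, high, and quiet wins under these weights
```

Whether such objective-chasing behavior deserves to be called "intelligent" is exactly the dispute in the surrounding comments; the sketch only shows that the behavior itself needs no such label.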
So, we somehow "outthink [an] opponent" without "intelligence"? This is just the kind of loose talk that drives sloppy "AI" "solutions" to non-problems.
You first ... collect data for building predictive models of tactically relevant factors and observables, ideally parameters you can observe in the enemy while fighting. ... opponent model gets built up, ... cross-correlated with things like personnel databases, peacetime data collection and such ...
But that would just be a small side project in identifying contacts by means of data fusion.... <snip>
The details are not something you can know beforehand; you can just ... stumble upon exploitable information once in a while. ...
Trillions of data points will be collected, thousands of ideas and models explored, hundreds of software changes are to be expected throughout a campaign. ...
-------
The Air Force making AI into lower-bandwidth RC airplanes is a far cry from an all-integrating, hyper-informational model that seeks to outthink the opponent on all levels of conflict across...
IIRC it was Target, not Amazon, and it was before the teen's father knew. She did know.

AI as it stands now doesn’t think or necessarily make decisions well. What it can do, with unnerving accuracy, is find patterns in vast data sets in near real time. One of my favorite examples is Amazon software literally predicting pregnancy by shopping patterns, before the person involved necessarily knows.
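The pattern-finding mechanism behind that story is easy to sketch. A toy version with invented shopping baskets: compare how often each item appears in one group of customers versus another, and items disproportionately common in the target group float to the top (a real retailer would use vastly more data and proper statistics, not this crude lift ratio):

```python
from collections import Counter

group_a = [  # baskets from customers later known to be in the target group (invented)
    ["unscented lotion", "prenatal vitamins", "cotton balls"],
    ["unscented lotion", "large tote bag"],
    ["prenatal vitamins", "cotton balls"],
]
group_b = [  # baskets from everyone else (invented)
    ["scented candle", "cotton balls"],
    ["large tote bag", "scented candle"],
    ["cotton balls", "beer"],
]

def freq(baskets):
    """Fraction of baskets in which each item appears."""
    counts = Counter(item for basket in baskets for item in basket)
    return {item: counts[item] / len(baskets) for item in counts}

fa, fb = freq(group_a), freq(group_b)
# Lift: how much more common an item is in group A than in group B
# (with a small smoothing default for items never seen in group B).
lift = {item: fa[item] / fb.get(item, 1 / (len(group_b) + 1)) for item in fa}
for item, value in sorted(lift.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {value:.2f}")
```

On this toy data, "unscented lotion" and "prenatal vitamins" stand out with high lift, while everyday items like cotton balls score near 1.0; no thinking or decision-making is involved, only correlation at scale.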