anti armed robot tactics...

Re: anti armed robot tactics...

Postby Joao » Tue Oct 22, 2013 8:47 pm

Machine gun-toting robots deployed on DMZ
Jon Rabiroff, Stars and Stripes, July 12, 2010

DEMILITARIZED ZONE, Korea — Security along the DMZ has gone high-tech, as South Korea has quietly installed a number of machine gun-armed robots to serve as the first line of defense against the potential advance of North Korean soldiers.

The stationary robots — which look like a cross between a traffic signal and a tourist-trap telescope — are more drone than Terminator in concept, operated remotely just outside the southern boundary of the DMZ by humans in a nearby command center.

Officials refuse to say how many or where the robots have been deployed along the heavily fortified border between the two Koreas, but did say they were installed late last month and will be operated on an experimental basis through the end of the year.

[...]


(I didn't search exhaustively but was unable to find a follow-up about any decisions made following the 2010 "experimental" phase.)

American Dream [w/minor formatting tweaks by Joao] » Wed Feb 27, 2008 8:10 pm wrote:
Automated killer robots 'threat to humanity': expert
(AFP) – Feb 26, 2008

PARIS — Increasingly autonomous, gun-toting robots developed for warfare could easily fall into the hands of terrorists and may one day unleash a robot arms race, a top expert on artificial intelligence told AFP. "They pose a threat to humanity," said University of Sheffield professor Noel Sharkey ahead of a keynote address Wednesday before Britain's Royal United Services Institute.

Intelligent machines deployed on battlefields around the world -- from mobile grenade launchers to rocket-firing drones -- can already identify and lock onto targets without human help.

There are more than 4,000 US military robots on the ground in Iraq, as well as unmanned aircraft that have clocked hundreds of thousands of flight hours. The first three armed combat robots fitted with large-caliber machine guns deployed to Iraq last summer, manufactured by US arms maker Foster-Miller, proved so successful that 80 more are on order, said Sharkey.

But up to now, a human hand has always been required to push the button or pull the trigger. If we are not careful, he said, that could change. Military leaders "are quite clear that they want autonomous robots as soon as possible, because they are more cost-effective and give a risk-free war," he said.

[...]

Re: anti armed robot tactics...

Postby elfismiles » Thu Oct 24, 2013 10:40 am

I believe the US and Israel have some automated / remote-controlled machine-gun towers along portions of walls keeping the Palestinians in their prison and along the US/Mex border. Can't find the posts about this from around here yet but still looking... I think they are mentioned in this video by Daniel Suarez.

EDIT: Here is what I was remembering...

What’s Next in National Security: Robo-Snipers, AutoKillZones
by General Patton » 05 Jan 2010 01:53
viewtopic.php?f=8&t=26510


elfismiles » 05 Jan 2010 18:59 wrote:
Thought this atrocity was familiar ... had seen an article on it back in 2008 but the OP was even earlier, 2007.

Where is the puke emoticon?

How long till DARPA proposes mounting such turrets on top of cell/microwave towers?


Israeli “Auto Kill Zone” Towers Locked and Loaded
By Noah Shachtman, December 5, 2008

On the U.S.-Mexico border, the American government has been trying, with limited success, to set up a string of sensor-laden sentry towers, which would watch out for illicit incursions. In Israel, they’ve got their own set of border towers. But the Sabras’ model comes with automatic guns, operated from afar.

The Sentry Tech towers are basically remote weapons stations, stuck on top of silos. "As suspected hostile targets are detected and within range of Sentry-Tech positions, the weapons are slewing toward the designated target," David Eshel describes over at Ares. "As multiple stations can be operated by a single operator, one or more units can be used to engage the target, following identification and verification by the commander."

We flagged the towers last year, as the Israeli Defense Forces were setting up the systems, designed to create 1500-meter deep "automated kill zones" along the Gaza border.

"Each unit mounts a 7.62 or 0.5" machine gun, shielded from enemy fire and the elements by an environmentally protective bulletproof canopy," Eshel explains. "In addition to the use of direct fire machine guns, observers can also employ precision guided missiles, such as Spike LR optically guided missiles and Lahat laser guided weapons."

http://www.wired.com/dangerroom/2008/12 ... i-auto-ki/

Re: One Drone Thread to Rule them ALL
by elfismiles » 04 Oct 2013 20:39

RELEASE THE AUTONOMOUS KILLING MACHINES!!!
Er, um ... I mean: JUST SAY NO to Autonomous Killing Machines
... unless they kill jellyfish ...

TED Talk Warns Of The Dangers Of Autonomous Robots Making Lethal Decisions
Date: Jun 25, 2013 | Author: Joelle Renstrom
http://www.giantfreakinrobot.com/sci/sc ... sions.html

Daniel Suarez: The kill decision shouldn't belong to a robot
http://www.ted.com/talks/daniel_suarez_ ... robot.html


http://www.youtube.com/watch?v=pMYYx_im5QI

A call for engineers to stop autonomous killing machines now.
By nsharkey on March 12, 2013
http://icrac.net/2013/03/a-call-for-eng ... hines-now/
http://www.theengineer.co.uk/military-a ... 20.article

“This is a call to engineers to stand up and demand the prohibition of autonomous lethal targeting by robots. I ask this of engineers because you are the ones who know just how limited machines can be when it comes to making judgments; judgments that only humans should make; judgments about who to kill and when to kill them.”

These Killer Autonomous Robots Are Already Switched On
By Brian Merchant
http://motherboard.vice.com/blog/these- ... e-trenches

United Nations
A/HRC/23/47
General Assembly
Distr.: General
9 April 2013
Original: English

Human Rights Council
Twenty-third session
Agenda item 3
Promotion and protection of all human rights, civil, political, economic, social and cultural rights, including the right to development

Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns

Summary
Lethal autonomous robotics (LARs) are weapon systems that, once activated, can select and engage targets without further human intervention. They raise far-reaching concerns about the protection of life during war and peace. This includes the question of the extent to which they can be programmed to comply with the requirements of international humanitarian law and the standards protecting life under international human rights law. Beyond this, their deployment may be unacceptable because no adequate system of legal accountability can be devised, and because robots should not have the power of life and death over human beings. The Special Rapporteur recommends that States establish national moratoria on aspects of LARs, and calls for the establishment of a high level panel on LARs to articulate a policy for the international community on the issue.

http://www.ohchr.org/Documents/HRBodies ... -47_en.pdf


Cosmic Cowbell » 04 Oct 2013 16:52 wrote:
Re: This week in jellyfish

Jellyfish are serious business. If you get enough of them in one place, bad things happen. And we're not just talking about some mildly annoying stings, but all-out nuclear war. Obviously, we have to fight back. With ROBOTS.

In South Korea, jellyfish are threatening marine ecosystems and are responsible for about US $300 million in damage and losses to fisheries, seaside power plants, and other ocean infrastructure. The problem is that you don't just get a few jellyfish. I mean, a few jellyfish would be kind of cute. The problem is that you get thousands of them. Or hundreds of thousands. Or millions, all at once, literally jellying up the works.

Large jellyfish swarms have been drastically increasing over the past decades and have become a problem in many parts of the world, Hyun Myung, a robotics professor at the Korea Advanced Institute of Science and Technology (KAIST), tells IEEE Spectrum. And they aren't affecting just marine life and infrastructure. "The number of beachgoers who have been stung by poisonous jellyfish, which can lead to death in extreme cases, has risen," he says. "One child died due to this last year in Korea."

So Professor Myung and his group at KAIST set out to develop a robot to deal with this issue, and last month, they tested out their solution, the Jellyfish Elimination Robotic Swarm (JEROS), in Masan Bay on the southern coast of South Korea. They've built three prototypes like the one shown below.


More...

http://spectrum.ieee.org/automaton/robo ... otic-swarm


elfismiles » 08 Oct 2013 16:49 wrote:
I hate my work computer ... this fucking website wouldn't even display the freaking article - I had to "View Source" and cut and paste the html into an html-to-bb-code converter just to read it :wallhead: :mad2:
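
(As an aside, here is a minimal Python sketch of that view-source workaround, assuming the page source has been saved to a local file; "page_source.html" is just a placeholder name, not from the original post. The standard-library html.parser is enough to strip the tags and recover readable article text without an online converter.)

[code]
# Minimal sketch: recover readable text from a saved "View Source" dump,
# using only Python's standard-library HTML parser (no converter site needed).
# "page_source.html" is a placeholder filename.
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # nesting depth inside script/style tags

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        text = data.strip()
        if text and self._skip_depth == 0:
            self.parts.append(text)


def html_to_text(source: str) -> str:
    """Feed raw HTML to the parser and join the recovered text fragments."""
    parser = TextExtractor()
    parser.feed(source)
    return "\n".join(parser.parts)


if __name__ == "__main__":
    with open("page_source.html", encoding="utf-8") as f:
        print(html_to_text(f.read()))
[/code]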

http://www.nationaljournal.com/national ... n-20131008

Soon, Drones May Be Able to Make Lethal Decisions on Their Own

Scientists, engineers, and policymakers are all figuring out ways drones can be used better and more smartly: more precisely, with less damage to civilians, longer range, and better staying power. One method under development is increasing the autonomy of the drone itself.

Eventually, drones may have the technical ability to make even lethal decisions autonomously: to respond to a programmed set of inputs, select a target and fire their weapons without a human reviewing or checking the result. Yet the idea of the U.S. military deploying a lethal autonomous robot, or LAR, is sparking controversy. Though autonomy might address some of the current downsides of how drones are used, it introduces new downsides that policymakers are only just learning to grapple with.

The basic conceit behind a LAR is that it can outperform and outthink a human operator. "If a drone's system is sophisticated enough, it could be less emotional, more selective and able to provide force in a way that achieves a tactical objective with the least harm," said Purdue University Professor Samuel Liles. "A lethal autonomous robot can aim better, target better, select better, and in general be a better asset with the linked ISR [intelligence, surveillance, and reconnaissance] packages it can run."

Though the pace of drone strikes has slowed -- only 21 have struck Pakistan in 2013, versus 122 in 2010, according to the New America Foundation -- unmanned vehicles remain a staple of the American counterinsurgency toolkit. But drones have built-in vulnerabilities that military planners have not yet grappled with. Last year, for example, an aerospace engineer told the House Homeland Security Committee that with some inexpensive equipment he could hack into a drone and hijack it for some rogue purpose.

Drones have been hackable for years. In 2009, defense officials told reporters that Iranian-backed militias used $26 of off-the-shelf software to intercept the video feeds of drones flying over Iraq. And in 2011, it was reported that a virus had infected some drone control systems at Creech Air Force Base in Nevada, raising concerns about the security of unmanned aircraft.

It may be that the only way to make a drone truly secure is to allow it to make its own decisions without a human controller: if it receives no outside commands, then it cannot be hacked (at least not as easily). And that's where LARs might be the most attractive.

Though they do not yet exist, and are not possible with current technology, LARs are the subject of fierce debate in academia, the military, and policy circles. Still, many treat their development as an inevitability. But how practical would LARs be on the battlefield?

Heather Roff, a visiting professor at the University of Denver, said many conflicts, such as the civil war in Syria, are too complex for LARs. "It's one thing to use them in a conventional conflict," where large militaries fight away from cities, "but we tend to fight asymmetric battles. And interventions are never only military campaigns -- the civilian effects matter."

Roff says that because LARs are not sophisticated enough to meaningfully distinguish between civilians and militants in a complex, urban environment, they probably would not be effective at achieving a constructive military end -- if only because of how a civilian population would likely react to self-governing machines firing weapons at their city. "The idea that you could solve that crisis with a robotic weapon is naïve and dangerous," she said.

Any autonomous weapons system is unlikely to be used by the military, except in extraordinary circumstances, argued Will McCants, a fellow at the Brookings Saban Center and director of its project on U.S. Relations with the Islamic World. "You could imagine a scenario," he says, "in which LAR planes hunted surface-to-air missiles as part of a campaign to destroy Syria's air defenses." It would remove the risk to U.S. pilots while exclusively targeting war equipment that has no civilian purpose.

But such a campaign is unlikely to ever happen. "Ultimately, the national security staff," he said, referring to personnel that make up the officials and advisers of the National Security Council, "does not want to give up control of the conflict." The politics of the decision to deploy any kind of autonomous weaponry matters as much as the capability of the technology itself. "With an autonomous system, the consequences of failure are worse in the public's mind. There's something about human error that makes people more comfortable with collateral damage if a person does it," McCants said.

That's not to say anyone is truly comfortable with collateral damage. "They'd rather own these kinds of decisions themselves and be able to chalk it up to human error," McCants said. Political issues aside, B.J. Strawser, an assistant professor at the Naval Postgraduate School, says that LARs simply could not be used effectively in a place like Syria. "You'd need exceedingly careful and restrictive ROEs [rules of engagement], and I worry that anyone could carry that out effectively, autonomous weapon or not," he said.

"I don't think any actor, human or not, is capable of carrying out the refined, precise ROEs that would enable an armed intervention to be helpful in Syria."

http://www.nationaljournal.com/national ... n-20131008