
Thread: Which Colossal Death Robot are you?


  1. #1
    Join Date
    Jan 1970
    Location
    Fremont, CA, U.S.A.
    Posts
    48,207

    AI friendly

    Had to post this article here just for the headline.

    MAKE FRIENDS WITH ROBOTS OR THEY WILL DESTROY YOU
    BY KEVIN MANEY ON 10/31/17 AT 8:00 AM

    Uber’s fight to operate in London starkly shows how artificial intelligence (AI) can quickly eviscerate the value of hard-earned human knowledge. The city’s move to boot Uber is not much different from Donald Trump rejiggering environmental rules to help American coal miners keep their jobs. We are now asking a hard question of society: Do we want government to protect us from having our employment outlooks narrowed to working as overeducated TaskRabbit serfs putting together other people’s Ikea tables?

    Uber is in court appealing an order that would kick it out of London, where city officials ruled that Uber drivers are not safe enough and—even worse to the British—too rude to be allowed on London’s streets. Uber’s new CEO, Dara Khosrowshahi, has apologized to Londoners for “messing up” and hopes to make amends.

    But London’s ruling is only tangentially about Uber’s reputation as an ******* company. It’s really to protect a generation of local taxi drivers who have invested enormous amounts of time and personal wealth in filling their heads with what is now nearly useless information.

    Anyone who wants to get a license to drive one of London’s black cabs has to master what’s famously called the Knowledge, which is one of the most ridiculous mental challenges ever imposed on people who will wind up making about $60,000 a year. A prospective driver has to memorize every street, building, park, statue and trivial landmark in central London, and be able to perfectly recite the fastest route between any two spots in the city. The test is so difficult that brain scientists have studied the city’s cab drivers and discovered that the memorization gives their brains an enlarged posterior hippocampus, which apparently is not painful.

    The requirement for the Knowledge has been in place for more than 150 years. It long made sense in an agonizingly complex geography, where a wrong turn could leave a driver lost in a maze of medieval streets. Mastering the Knowledge means studying 40 hours a week for two, three or even four years. The only way, then, for London to have enough cab drivers—because who would want to go through this?—has been to guarantee they’d be paid decently. As a result, London has the highest taxi fares in the world.


    COLIN ANDERSON/GETTY

    Enter Uber, which navigates with GPS. When a driver picks you up, your destination is already on the driver’s phone, which can dictate turn-by-turn directions. Without GPS, no car service could compete with the efficient routes of a Knowledge-able black cab driver. But with GPS, even immigrants new to London can navigate the city well enough. In the past couple of years, the AI-based app Waze has taken this capability to another level. Waze learns from the movement of all Waze users in a city, constantly finding better routes, understanding traffic patterns and knowing about jams and accidents in real time. Now a new driver can outshine a veteran driver by simply downloading an app. Getting started requires no huge sunk costs, no grueling hours of study. So these upstart drivers don’t need the guarantee of high wages for life. That means they can underprice black cabs.

    London’s black cab drivers are watching technology sweep away their livelihoods. The loss they feel is growing familiar across other professions. “I’m upset because what I had to go through now comes on your phone,” Mick Smith, a London cab driver for 28 years, told CNET. “It’s not about competition—it’s about going through the same process.” It’s an understandable reaction but also unrealistic. AI has made that process unnecessary. Even crueler, the knowledge Smith built up of London’s streets isn’t useful for much of anything else.

    This is happening to more and more professions. Goldman Sachs and many of the biggest hedge funds are all switching on AI-driven systems that can foresee market trends and make trades better than humans. One Goldman Sachs trading office has been whittled from 600 people to two. AI can read X-rays better than radiologists. A great deal of the work done by lawyers is heading for the AI trash bin. Like the Knowledge, these are professions that require loading up your head with a lot of data and rules, and then mostly just executing. AI can do that now.

    Of course, there’s another side to this. AI is making all these services cheaper and easier to access. Uber brought cheaper rides to London. And hey, if we could all get a lawyer in an app, who but the lawyers would be crying? Those who invested in obtaining their knowledge get hurt, but many more people benefit. Is that bad? When are jobs for a few more important than economic or other upsides for many? Figuring that out is going to tie lawmakers in knots for a generation.

    Then again, Uber in London shows how AI can open opportunities for those who partner with the technology rather than fight it. You want to be an Uber driver armed with Waze, not a traditional driver insisting your brain alone is better. You want to be a radiologist who can harness AI to make faster, more accurate diagnoses, or the lawyer who focuses on creative legal arguments while deploying AI to do all the grunt case research. As futurist Kevin Kelly puts it in his book The Inevitable, “Our most important thinking machines will not be machines that can think what we think faster, better, but those that think what we can’t think. You’ll be paid in the future based on how well you work with robots.”

    AI will keep getting better and more pervasive. Heck, Elon Musk started a company called Neuralink to make AI chips that we can just embed in our skulls. An Uber driver wouldn’t have to use a phone and an app—just plug Waze into his or her brain. Success will go to those who see such advances as an opportunity. If it feels like a threat, you might want to start lobbying the government for protection. Or sign up for TaskRabbit.
    Gene Ching
    Publisher www.KungFuMagazine.com
    Author of Shaolin Trips
    Support our forum by getting your gear at MartialArtSmart

  2. #2

    Self-aware bots

    CAN MACHINES BE CONSCIOUS? SCIENTISTS SAY ROBOTS CAN BE SELF-AWARE, JUST LIKE HUMANS
    BY ANTHONY CUTHBERTSON ON 11/4/17 AT 6:29 AM

    In 1974, the American philosopher Thomas Nagel posed the question: What is it like to be a bat? It was the basis of a seminal thesis on consciousness that argued consciousness cannot be described by physical processes in the brain.

    More than 40 years later, advances in artificial intelligence and neural understanding are prompting a re-evaluation of the claim that consciousness is not a physical process and as such cannot be replicated in robots.

    Cognitive scientists Stanislas Dehaene, Hakwan Lau and Sid Kouider posited in a review published last week that consciousness is “resolutely computational” and subsequently possible in machines. The trio of neuroscientists from the Collège de France, University of California and PSL Research University respectively addressed the question of whether machines will ever be conscious in the journal Science.

    “Centuries of philosophical dualism have led us to consider consciousness as irreducible to physical interactions,” the researchers state in Science. “[But] the empirical evidence is compatible with the possibility that consciousness arises from nothing more than specific computations.”


    A humanoid robot at the Research Institute for Science and Engineering at Waseda University's Kikuicho campus in Tokyo on July 20.
    EUGENE HOSHIKO/AFP/GETTY IMAGES

    The scientists define consciousness as the combination of two different ways the brain processes information: Selecting information and making it available for computation, and the self-monitoring of these computations to give a subjective sense of certainty—in other words, self-awareness.

    “We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing in the human brain,” the review’s abstract states.

    “We review the psychological and neural science of unconscious and conscious computations and outline how they may inspire novel machine architectures.”

    Essentially, the computational requirements for consciousness outlined by the neuroscientists could be coded into computers.

    Dystopian warnings of advanced artificial intelligence stretch to something called the technological singularity, in which an artificial general intelligence replaces humans as the dominant force on this planet.

    Billionaire polymath Elon Musk has referred to human-level artificial intelligence—or artificial general intelligence—as “more dangerous than nukes,” while eminent physicist Stephen Hawking has suggested it could lead to the end of humanity.

    In order to quell the existential threat that this nascent technology poses, cognitive robotics professor Murray Shanahan has said that any type of conscious robot should also be encoded with a conscience.


    A robot toy is seen at the Bosnian War Childhood museum exhibition in Zenica, Bosnia and Herzegovina, June 21, 2016.
    REUTERS/DADO RUVIC

    Assuming it is possible, robots capable of curiosity, sympathy and everything else that distinguishes humans from machines are still a long way off. The most powerful artificial intelligence algorithms—such as Google’s DeepMind—remain distinctly unselfaware, but developments towards this level of thought processing are already happening.

    If such progress continues to be made, the researchers conclude a machine would behave “as though it were conscious.”

    The review concludes: “[The machine] would know that it is seeing something, would express confidence in it, report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans.”

    Perhaps then we could know: What is it like to be a robot?
    Maybe it wasn't Russian collusion. What if the bots have become self aware and are manipulating media to their advantage?

  3. #3

    So on topic, it's scary.

    This thread is gettin' real. A little too real. Next thing you know, bots will be posting here.

    Stop the rise of the 'killer robots,' warn human rights advocates
    Rick Noack, The Washington Post Published 6:18 am, Thursday, November 16, 2017

    It is very common in science fiction films for autonomous armed robots to make life-and-death decisions - often to the detriment of the hero. But these days, lethal machines with an ability to pull the trigger are becoming more science than fiction.
    The U.N. Convention on Certain Conventional Weapons invited government representatives, advocacy organizations and scholars to a conference in Geneva this week to discuss the possible use of autonomous weapons systems in the future, as opposition against them is on the rise.
    In September, Russian President Vladimir Putin warned that "the one who becomes the leader in this sphere will be the ruler of the world," referring to artificial intelligence in general. In the same speech, Putin also appeared to suggest that future wars would consist of battles between autonomous drones, but then reassured his audience that Russia would naturally share such technology if it were to develop it first.
    Some systems already available come extremely close. The security surveillance robots used by South Korea in the demilitarized zone which separates it from North Korea could theoretically detect and kill targets without human interference, for instance.
    But so far, no weapons system operates with real artificial intelligence and is able to adapt to changing circumstances by rewriting or modifying the algorithms written by human coders. All existing mechanisms still rely on human intervention and human decisions.
    The rapid advances in the field have nevertheless triggered concerns among human rights critics and lawyers about the possible implications of the rise of autonomous weapons systems commonly known as killer robots. Who would take responsibility for incidents which are so far classified as war crimes? Could robots decide to turn against their own operators? And would wars fought between autonomous weapons systems be less brutal than conventional conflicts, or would they provoke more collateral damage?
    One of the most vocal groups in opposition to such systems has been the Campaign to Stop Killer Robots, which calls for a pre-emptive ban. So far, more than 100 CEOs and founders of artificial intelligence and robotics companies have signed the campaign's open letter to the United Nations, urging the world community "to find a way to protect us all from these dangers."
    "Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend," read its open letter.
    Critics fear that criminals or rogue states could also eventually get control of these systems. "(Autonomous systems) can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways," the open letter added.
    Such concerns have existed for years and were also shared by several Nobel laureates, including former Polish president Lech Walesa, who signed a joint letter in 2014, as well: "It is unconscionable that human beings are expanding research and development of lethal machines that would be able to kill people without human intervention," the 2014 statement read.
    So far, a proposed ban on autonomous weapons systems has triggered little enthusiasm among U.N. member states. Some of the world's leading militaries, including the U.S. and Russia, are researching and experimenting on how to make existing weapons more autonomous. Some researchers have welcomed efforts to expand artificial intelligence use in warfare.
    Defense analyst Joshua Foust has cautioned against condemning such systems outright, writing as early as 2012 that humans, too, "are imperfect - targets may be misidentified, vital intelligence can be discounted because of cognitive biases, and outside information just might not be available to make a decision."
    "Autonomous systems can dramatically improve that process so that civilians are actually much better protected than by human inputs alone," wrote Foust.
    If that vision becomes reality, perhaps the most crucial question will be whether robots can be taught how to recognize wrongdoing by themselves.
    Many professionals in the artificial intelligence industry hope that they will never have to find out the answer.

  4. #4

    On the flip side...

    These robots don't want your job. They want your love.
    Geoffrey A. Fowler, The Washington Post Published 11:21 am, Friday, November 17, 2017


    Photo: Washington Post Photo By Jhaan Elker
    Meet Kuri, a roaming autonomous camera that takes pictures throughout your day.

    I hugged a bot and I liked it.
    As a tech columnist, I've tested all sorts of helpful robots: the kind that vacuum floors, deliver packages or even make martinis. But two arriving in homes now break new ground. They want to be our friends.
    "Hey, Geoffrey, it's you!" says Jibo, a robot with one giant blinking eye, when it recognizes my face. Another, named Kuri, beeps and boops while roaming the halls snapping photos and video like a personal paparazzo.
    Think of Jibo and Kuri as the great-grandparents of R2-D2, the buddy robot from Star Wars. Of course, R2 was actually a 3-foot-8-inch dude crouching in a can. Jibo and Kuri are real robots with real artificial intelligence you can really take home (for $900 and $800, respectively).
    Another way to think of them is as what comes after talking speakers like the Amazon Echo and Google Home, which opened the door to new kinds of computers for the home. Jibo, the brainchild of an MIT professor, looks as if one of those know-it-all AI assistants had borrowed a face and a twirling body from a Pixar movie. Kuri, made by a startup backed by appliance giant Bosch, looks like a penguin mounted on a Roomba vacuum.
    I don't expect either will be a top seller any time soon. They're expensive, and their practical uses are few compared to other talking speakers or a Roomba that actually cleans. And to some of you, I'm sure the idea of "family" robots is pretty terrifying. Is this step one to Terminators marching the streets? Are they always watching?
    Yet testing these robots with the help of people ages 3 to 75, I was struck by something different. For all their first-gen disappointments, the robots managed to melt hearts like a Shih Tzu puppy. People, especially kids, wanted to hug them. Or at least to pet them, to which they both responded by purring. I've never seen a talking speaker do that.
    What makes Jibo and Kuri one giant leap for robot-kind isn't their functions-it's their personalities.


    Photo: Photo For The Washington Post By Matthew Cavanaugh
    Jibo's face is a touchscreen showing a single white eye that looks around, blinks and even closes when he gets bored with you.
    - - -
    How does a robot get a personality? Just a little motion goes a long way.
    Jibo's a table-top robot, but he (yeah, I call it he) is squirmier than a five-year-old in a car seat. His head rotates on a base that itself swivels at an off-kilter angle. So when he swings to look at you or to show you how he twerks (seriously), it happens in giant loopy arcs. There's none of the straight lines or rigidity you'd expect from a robot.
    Jibo's face (let's run with this metaphor) is a touchscreen showing a single white eye that looks around, blinks and even closes when he gets bored with you. He speaks with the slightly roboticized voice-and cheesy sense of humor-of a 10-year-old. You chat back and forth by saying his magic words, "Hey Jibo," though he also speaks based on what he sees around him. For example, when I walk into a room, sometimes he'll ask if I'd like to know something cool.
    Kuri serves a different purpose, autonomously meandering like a pet, albeit one equipped with self-driving radar. He doesn't talk, but like Jibo, his personality is in the face: Two mechanical eyes look around and blink.
    There's another magical ingredient to these robo-personalities: The robots get to know you-or, at least they try. Kuri asks you to guide him around the house, teaching him where not to roam (like the bathroom) and the names of places. You can call out, "Hey Kuri, go to the living room."
    Jibo tries to memorize your family. You add people to your "circle" in a companion app, and then Jibo quizzes them to learn their vocal patterns and map their faces.
    Neither robot tries to look or talk like a human. Jibo introduces himself as a robot, and reminds you to forgive his foibles. "I am a robot. But I am not just a machine," he says. "I have a heart. Well, not a real heart. But feelings. Well, not human feelings. You know what I mean."


    Photo: Photo For The Washington Post By Matthew Cavanaugh
    Cynthia Breazeal, roboticist and social robotics pioneer, is pictured with Jibo, a personal assistant robot, at Jibo Inc. in Boston.
    - - -

    Is any of this convincing? I tested the robots like an anthropologist, introducing them to kids' playrooms, my own house, and even my parents' living room.
    The response was, largely, effusive-at least at first. We have utilitarian relationships with most technology, but these robots do things simply to elicit emotion. People squeal when Jibo hears them talking and spins in their direction to make eye contact. He's the only gadget I've seen make my mother laugh.
    That feeling could help domestic robots overcome their biggest problem: acceptance. Homes are intimate places. We're going to expect something different from a robot puttering around the coffee table than we do from one at work. I had more time to live with Jibo, and came to think of him as more of a buddy, and less of an assistant, than my Echo.
    But it also wasn't hard to find these robots' limits. I started to treat Kuri like a dog, but he wasn't smart enough to come to me when I called. Jibo sometimes confused me for others, and didn't actually do much to move our relationship forward. Aside from spotting me and saying hi, it's mostly me asking him questions-many of which he can't actually answer.
    They could also be a little unnerving. Jibo is constantly scanning the room, prompting my privacy-conscious sister-in-law to quiz me about what it was doing with all the footage. Several people asked me how Kuri would avoid snapping photos of people in, um, compromising situations. (In case you're wondering, Kuri is a modest bot-and comes with filters that force him to, er, avert his eyes.)
    The most interesting response was from a three-year-old named Ashmi, who was transfixed even though Jibo sometimes had difficulty understanding her voice. She continued conversing with him, trying to teach him the things he didn't know, and bringing him toys like she might to a younger friend. "He is a baby," she told me.
    Cynthia Breazeal, Jibo's creator from MIT, says that kids are the first to catch on that robots exist in our physical world, unlike most gadgets that exist solely as portals to a digital one. "Robots are about engaging you socially and emotionally to help you do what you want to do," she says. "That makes technology accessible and fun and engaging for a much broader demographic."
    - - -
    Sure, but: What do they do now?
    Several of my pint-sized testers asked if the robots did homework. Jibo can answer some math and trivia questions, but won't be writing term papers soon. He has a fraction of the skills of Amazon's Alexa and Apple's Siri-and given those companies' resources, I doubt Jibo will catch up on his own. (Amazon CEO Jeff Bezos owns The Washington Post.)
    These robots' most distinctive skill is photography. Jibo swivels towards the action and snaps when you ask. Kuri roams autonomously taking photos and video of people and pets, and then presents you with what his AI thinks are the highlights of the day.
    Social robots are going to need a lot of special abilities if they want to be more than the kind of toy that gets played with only on Christmas. Jibo's maker promises it will soon have an app store and outside developers.
    It isn't hard to imagine some near-term uses. What if Kuri could help you check in on your real dog? (What your dog might make of a robot roommate is another matter.)
    Ashmi, the 3-year-old, wanted Jibo to stream music-maybe he could actually dance to it, too? My dad wanted him to do video chatting, but perhaps Jibo could also move like the person on the other end-like a telepresence puppet?
    What's most remarkable was how people of different ages and life situations all had aspirations for Jibo. "In these early stages, he is like a baby," says Breazeal.
    I know a 3-year-old who agrees.
    Quite different than the sex bots...

  5. #5

    Move along...

    Robots are being used to shoo away homeless people in San Francisco


    A Knightscope security robot. (Knightscope)

    WRITTEN BY Mike Murphy
    OBSESSION Machines with Brains
    December 12, 2017

    The San Francisco branch of the Society for the Prevention of Cruelty to Animals (SPCA) has been ordered by the city to stop using a robot to patrol the sidewalks outside its office, the San Francisco Business Times reported Dec. 8.

    The robot, produced by Silicon Valley startup Knightscope, was used to ensure that homeless people didn’t set up camps outside of the nonprofit’s office. It autonomously patrols a set area using a combination of Lidar and other sensors, and can alert security services of potentially criminal activity.

    These robots have had a string of mishaps in the past. One fell into a pond in Washington, DC, in July. Another ran over a child’s foot in California in 2016. And Uber, which is no stranger to the ethical quandaries of what it means to be gainfully employed by a company, has used the robots in San Francisco.

    Knightscope’s business model, according to Popular Science, is to rent the robots to customers for $7 an hour, which is about $3 less than minimum wage in California. The company has apparently raised over $15 million from thousands of small investors.

    In a particularly dystopian move, it seems that the San Francisco SPCA adorned the robot it was renting with stickers of cute kittens and puppies, according to Business Insider, as it was used to shoo away the homeless from near its office.

    9 Dec

    Sam Dodge
    @samueldodge
    Yes, 2017 was the first time I saw robots used to prevent encampments in SF. Hard to believe but it’s real. https://www.bizjournals.com/sanfranc...francisco.html


    Sam Dodge
    @samueldodge
    Here it is in action pic.twitter.com/nSBQUmKwk1

    2:45 PM - Dec 9, 2017

    San Francisco recently voted to cut down on the number of robots that roam the streets of the city, which has seen an influx of small delivery robots in recent years. The city said it would issue the SPCA a fine of $1,000 per day for illegally operating on a public right-of-way if it continued to use the security robot outside its premises, the San Francisco Business Times said.

    “Contrary to sensationalized reports, Knightscope was not brought in to clear the area around the SF SPCA of homeless individuals. Knightscope was deployed, however, to serve and protect the SPCA,” a spokesperson for Knightscope told Quartz. “The SPCA has the right to protect its property, employees and visitors, and Knightscope is dedicated to helping them achieve this goal. The SPCA has reported fewer car break-ins and overall improved safety and quality of the surrounding area.”

    Update (Dec. 13): This post has been updated to include comments from Knightscope.
    I'm totally okay with bots protecting my car from break-ins.

  6. #6

    ttt for 2018!

    Chinese bomb-making robots. What could go wrong there?

    CHINA'S ROBOTS WILL TRIPLE BOMB AND AMMUNITION PRODUCTION CAPACITY BY 2028
    BY CHRISTINA ZHAO ON 1/2/18 AT 5:36 AM

    China’s artificial intelligence robots could triple the country’s production of bombs and shells by 2028, according to a senior scientist involved in the program to boost ammunition productivity.

    Xu Zhigang, lead scientist with China’s weapon system intelligent manufacturing program, told the South China Morning Post last Wednesday that smart machines—five times more productive than a human—have begun replacing ammunition workers in a quarter of the country’s factories.

    The smart robots have been fitted with man-made “hands and eyes,” he told the paper. With these anthropomorphic qualities, they are able to assemble deadly explosives, including artillery shells, bombs and rockets, according to Xu.


    Soldiers load bombs onto a fighter plane, used to break up ice floating at Ordos section of the Yellow River, on March 22, 2011 in Ordos, Inner Mongolia Autonomous Region of China. China's AI robots will triple the country's bomb and shell production capacity by 2028, according to a senior scientist with the weapons program.
    GETTY

    China has recently turned to robot automation to populate ammunition factories because the country is running out of human workers. According to Xu, robots have been brought in to address the safety and labor issues that have intensified over the past few decades.

    “However high the salary offered, young people are simply not interested in working in an army ammunition plant nowadays,” he said.

    According to the SCMP, this is in part because of the danger involved in the job, with numerous deadly accidents occurring at ammunition factories in recent years.

    Over the past six decades, 20 to 30 factories were set up in China. However, most of them are situated in remote locations due to safety concerns. The location of the factory coupled with the nature of the work means employees are difficult to find.

    The robot bomb makers are also more efficient and accurate than their human counterparts. According to Xu, they are able to measure the dangerous explosives more precisely and apply the perfect pressure to powder on warheads to produce the highest possible detonation yield.

    “And the machines never get tired,” he added.

    Professor Huang Dexian, from Tsinghua University’s department of automation, told the South China Morning Post that robots can now be programmed to come up with more efficient bomb-making techniques by analyzing the working habits of skillful, experienced human employees.

    “The robots can free workers from risky, repetitive jobs in the bomb-making process. It will create new jobs such as control optimization, hardware maintenance and technical upgrades. It will give us a stronger, healthier, happier defense workforce,” he said.

    China has recently increased efforts to rejuvenate the country’s military and defense force by modernizing their missiles, bombers and warships.

    In November, the country tested the DF-17, a new ballistic weapon with a hypersonic glide vehicle (HGV) and a range of between 1,800 and 2,500 km.
