Social discovery app Shouut raises $500,000

Shouut, a social discovery app by Giant Tech Labs Pvt Ltd, has raised $500,000 in angel funding from an undisclosed India-based high-net-worth angel investor.

“Shouut has been created to make people’s life easier with easy discovery for users, effective proximity marketing channel for brands along with trusted premium content,” said Praveer Kochhar, founder of Shouut.

For users, Shouut enables discovery of places, events and offers around the users’ location and for businesses, the platform enables amplification of business deals to users around their business location.

To ensure user stickiness, Shouut publishes content across categories such as food & drink, nightlife, shopping, travel & stay, and leisure & outdoor. The platform aggregates over 150 expert bloggers, travel writers, publications and experience providers to generate content, and hosts more than 25,000 recommendations across categories, enabling discovery in over 1,100 towns in India.

Some of the premium content partners are Lonely Planet India magazine, So Delhi, The Daily Pao and Kunzum Travels. In the retail space the platform is associated with Shoppers Stop, Croma, Reliance Digital, DLF Promenade, Benetton and many more.
Shouut has recently rolled out a new programme called Shout Fund, a platform that funds bloggers, travellers and other passionate people to discover their city or country. In its first leg, the company is pledging Rs.1 crore and aims to fund over 2,000 such ideas. The key focus of the initiative is to source content recommendations from across the country while enabling people to live their dreams; the company is also working to bring brands on board to increase the total funding kitty for this initiative.
The platform took about 10 months to develop and was launched in February 2016 as an Android app.

Intel names Nivruti Rai as India head

World’s largest silicon chip maker Intel has named Nivruti Rai as Intel India’s general manager. Rai succeeds Kumud Srinivasan, who will relocate to the US after the completion of her assignment in India.

Rai has been with Intel for over 20 years and has held several technical and business management positions across different functions both in the US and India. Rai will head the R&D centre in India, apart from managing the Platform Engineering Group (PEG) of the California-based semiconductor company.

“The Intel facilities in Bengaluru and Pune are key design sites, involving advanced engineering capabilities across the computing spectrum from servers to IoT and wearables. India, which has seen remarkable growth in the last few years, also presents us with an opportunity to innovate and design relevant products and solutions to grow technology adoption in the country,” said Rai, who has been working in India for the past 10 years.

She holds a Master of Science degree from the University of Lucknow, India, and an MS in Electrical Engineering from Oregon State University (OSU).

Srinivasan said: “I have seen the site grow, innovate and contribute significantly to Intel’s success worldwide. We are fortunate to have a strong and stable leadership team at the site with a vision for innovation and growth.”

Rai will partner closely with Debjani Ghosh, who heads Intel’s sales and marketing in South Asia. Ghosh will continue to be responsible for establishing new growth areas for Intel in the South Asian region, the company said. “Ghosh will continue leading strategic engagements with governments and industry in these countries to establish policies and initiatives that help accelerate the adoption of technology in the region, especially as an enabler of inclusive growth and development,” the company said.
Intel has over 7,500 employees in India, which is the company’s largest non-manufacturing site outside the US.

Japan driverless taxi startup eyes partnerships with automakers

Japan’s Robot Taxi aims to forge partnerships with carmakers to develop a driverless taxi service in time for the 2020 Olympics, the technology company said, holding its first tests on public roads and joining a global race to develop self-driving cars.

The joint venture between gaming software maker DeNA Co and robotics developer ZMP has set the 2020 Games in Tokyo as a target to develop software to operate driverless cars and an online service to ferry athletes and tourists between Olympic venues and the city’s transport hubs.

On Monday, it launched an initial 10-day field test in which selected residents of Fujisawa, around 45 km (28 miles) south of Tokyo, can summon a Robot Taxi online or from their smartphones, to take them to a local supermarket and back home.

The test uses Toyota Motor Corp Estima minivans equipped with its “Robovision” stereo camera and data processing system, although the company has said that it is open to working with all carmakers to supply vehicles.

Robot Taxi plans to take on Japan’s famously immaculate cabs in urban centres and remote, rural towns where public transport for elderly residents is limited. The company plans to focus on systems development rather than building vehicles from scratch.

“It’s difficult to make a car from the ground up when you consider production cost and safety, and we have the world’s best automakers already doing that here in Japan,” Robot Taxi chairman Hisashi Taniguchi told reporters.

“Our strategy is to keep our costs low by partnering with automakers for the hardware, and to keep those production costs low while we create both the technology and the service,” he added.

The world’s largest automakers are competing with technology firms to create self-driving and driverless cars, with companies from Google to Toyota investing heavily in developing both hardware and software.
General Motors said in January it would invest $500 million in Lyft and laid out plans to develop an on-demand network of self-driving cars with the ride-sharing service.
The Japanese government pledged late last year to ease regulations to allow for self-driving cars to be tested on more public roads.
Still, from the standpoint of regulation and development, Japan is seen lagging behind other countries including Germany and the United States.

Deep-ocean sound waves may aid tsunami detection

Scientists are developing a system that may help predict a tsunami by detecting sound waves that race through the deep ocean more than 10 times faster than the more destructive wave.

“Severe sea states, such as tsunamis, rogue waves, storms, landslides, and even meteorite fall, can all generate acoustic-gravity waves,” said Usama Kadri, a research affiliate at the Massachusetts Institute of Technology (MIT).

“We hope we can use these waves to set an early alarm for severe sea states in general and tsunamis in particular, and potentially save lives,” Kadri said.

Acoustic-gravity waves are very long sound waves that cut through the deep ocean at the speed of sound.

These lightning-quick currents can sweep up water, nutrients, salts, and any other particles in their wake, at any water depth.

They are typically triggered by violent events in the ocean, including underwater earthquakes, explosions, landslides, and even meteorites, and they carry information about these events around the world in a matter of minutes.
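The headline speed comparison is easy to sanity-check. Below is a minimal sketch in Python, assuming the textbook shallow-water tsunami speed √(g·h) and a nominal ~1,500 m/s sound speed in seawater; both values are generic assumptions, not figures from the researchers:

```python
import math

SOUND_SPEED = 1500.0   # m/s, approximate speed of sound in seawater (assumed)
G = 9.81               # m/s^2, gravitational acceleration

def tsunami_speed(depth_m: float) -> float:
    """Shallow-water wave speed sqrt(g*h) for ocean depth h in metres."""
    return math.sqrt(G * depth_m)

# Compare the acoustic wave against the tsunami at a few depths.
for depth in (1000, 2000, 4000):
    v = tsunami_speed(depth)
    print(f"depth {depth} m: tsunami ~{v:.0f} m/s, "
          f"sound is ~{SOUND_SPEED / v:.1f}x faster")
```

Depending on depth, the acoustic wave comes out several to more than ten times faster than the tsunami itself, with the gap widening in shallower water, which is the margin an early-warning system could exploit.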

Researchers at MIT have now identified a less dramatic though far more pervasive source of acoustic-gravity waves — surface ocean waves.

These waves, known as surface-gravity waves, do not travel nearly as fast, far, or deep as acoustic-gravity waves, yet under the right conditions, they can generate the powerful, fast-moving, and low-frequency sound waves.

The researchers have developed a general theory that connects gravity waves and acoustic waves.

They found that when two surface-gravity waves, heading towards each other, are oscillating at a similar but not identical frequency, their interaction can release up to 95% of their initial energy in the form of an acoustic wave, which in turn carries this energy and travels much faster and deeper.

This interaction may occur anywhere in the ocean, in particular in regions where surface-gravity waves interact as they reflect from continental shelf breaks, where the deep-sea suddenly faces a much shallower shoreline.

Understanding this relationship between surface-gravity waves and acoustic-gravity waves allows researchers to describe how energy is exchanged between gravity and acoustic waves, researchers said.

This energy could be vital for many marine life forms, and it could play a role in water transport and the redistribution of carbon dioxide and heat to deeper waters, thereby sustaining a healthy marine environment, they said.
Kadri calculated that if two surface waves flow towards each other at roughly the same frequency and amplitude, as they meet and roll through each other the majority of their energy (up to 95%) can be turned into a sound wave, or acoustic-gravity wave.
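The interaction Kadri describes can be summarised as a triad resonance condition. As a sketch (the notation here is illustrative; the exact formulation is in the Journal of Fluid Mechanics paper), two opposing surface-gravity waves with frequencies $\omega_1 \approx \omega_2$ and horizontal wavenumbers $k_1$ and $-k_2$ can resonantly excite an acoustic mode satisfying

```latex
\omega_a = \omega_1 + \omega_2, \qquad k_a = k_1 - k_2 \approx 0
```

so the generated acoustic-gravity wave oscillates at roughly twice the surface-wave frequency, carries almost no net horizontal wavenumber from the pair, and travels through the water column at close to the speed of sound.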
This new understanding of wave interactions can be used for tsunami detection, researchers said.
The study was published in the Journal of Fluid Mechanics.

Google starts selling View Master, Cardboard VR via its official store

Search engine giant Google has started selling its Cardboard VR viewers on its official US store. The store has a separate virtual reality page that lists all the Cardboard viewers. The Google Tech C1-Glass VR viewer is priced at $14.99, and the Mattel View-Master VR Starter comes at a price tag of $29.99.

The Mattel View-Master VR Starter pack is designed for kids and offers child-friendly apps and videos on YouTube. The VR content includes world-famous landmarks, immersive video games and other things. The device comes with a rubber eyepiece that claims to offer a comfortable fit for kids and also helps block extra light from entering the device.

On the other hand, the C1-Glass Cardboard VR viewer flaunts a lightweight and portable design. It includes a microfiber bag, which makes it easier for the user to carry the headset and keep it clean. Users can adjust the grip to fit their smartphones, and the design blocks stray light from entering the headset. This one also works with Android and iOS smartphones with display sizes of 4 to 6 inches.
Last month, a report suggested that Google created a virtual reality (VR) computing division and moved Clay Bavor, the executive running its product management team, to run the new arm.

Technology initiatives to boost Digital India drive

The slew of new platforms and digital literacy training programmes in rural areas announced in the 2016 budget will help the government implement its flagship Digital India programme and spur business for Indian IT companies.

Industry experts had said that the programme lacked a firm roadmap for implementation. Also helping is the fact that some of the announced platforms have a dedicated budget allocation.

“The digitisation of the government sector, like the setting up of the Digital Literacy Mission, which will cover six crore rural households in India, ensures transparency,” said Anil Valluri, president of NetApp India & SAARC.

The budget announced 11 technology initiatives, including the use of data analytics to nab tax evaders, creating a substantial opportunity for IT companies to build the systems that will be required.
However, experts say, issues in procurement will still have to be ironed out before large-scale participation by IT companies is seen.
“It creates significant opportunity for IT companies to participate in large domestic projects, although the related procurement and payment processes will need to be revisited to excite IT companies,” said Sanjoy Sen, doctoral research scholar at Aston Business School, UK, and former partner at consultancy Deloitte.

Nasa probe observes Mars moon Phobos in new light

Nasa scientists are closer to solving the mystery of how Phobos was formed, using spectral images of the Martian moon captured in ultraviolet by the Maven mission.

In late November and early December last year, Nasa’s Mars Atmosphere and Volatile Evolution (Maven) mission made a series of close approaches to the Martian moon Phobos, collecting data from within 500 kilometres of the moon.

Among the data returned were spectral images of Phobos in the ultraviolet.

The images will allow Maven scientists to better assess the composition of this enigmatic object, whose origin is unknown, Nasa said.

Comparing Maven’s images and spectra of the surface of Phobos to similar data from asteroids and meteorites will help planetary scientists understand the moon’s origin — whether it is a captured asteroid or was formed in orbit around Mars.
The Maven data, when fully analysed, will also help scientists look for organic molecules on the surface.
Evidence for such molecules has been reported by previous measurements from the ultraviolet spectrograph on the Mars Express spacecraft, according to the US space agency.

The observations were made by the Imaging Ultraviolet Spectrograph instrument aboard Maven.

Feds Put AI in the Driver’s Seat


The artificial intelligence component of Google’s Level 4 autonomous cars can be considered the driver, whether or not the cars are occupied by humans, the U.S. National Highway Transportation Safety Administration said in a letter released Tuesday.

Level 4 full self-driving automation vehicles perform all safety-critical driving functions and monitor roadway conditions for an entire trip.

Google’s L4 vehicle design will do away with the steering wheel and the brake and gas pedals.

Current U.S. Federal Motor Vehicle Safety Standards, or FMVSS, don’t apply because they were drafted when driver controls and interfaces were the norm and it was assumed the driver would be a human, the NHTSA wrote to Chris Urmson, who heads Google’s Self-Driving Car Project.

Those assumptions won’t hold as autonomous car technology advances, and the NHTSA may not be able to use its current test procedures to determine compliance with the safety standards.

Google is “the only company so far committed to L4 because their objective is to completely eliminate the human, and thus human error, from driving,” said Praveen Chandrasekar, a research manager at Frost & Sullivan.

“Ford and GM are thinking about similar levels” of automation, he said.

Safety Standards

Google had provided two suggested interpretations of what a driver is and one for where the driver’s seating position is, and then applied these approaches to various provisions so its self-driving vehicle design could be certified compliant with FMVSS.

“The next question is whether and how Google could certify that the SDS (self-driving system) meets a standard developed and designed to apply to a vehicle with a human driver,” the NHTSA wrote. It must have a test procedure or other means of verifying such compliance.

The NHTSA’s interpretation “is significant, but the burden remains on self-driving car manufacturers to prove that their vehicles meet rigorous federal safety standards,” U.S. Transportation Secretary Anthony Foxx said Wednesday.

The NHTSA’s interpretation is “outrageous,” said John Simpson, a consumer advocate at Consumer Watchdog.

“Google’s own numbers reveal its autonomous technology failed 341 times over 15 months, demonstrating that we need a human driver behind the wheel who can take control. You’ll recall that the robot technology failed 272 times, and the human driver was scared enough to take control 69 times,” he said.

Unresolved Issues

Many of Google’s requests “present policy issues beyond the scope and limitations of interpretations and thus will need to be addressed using other regulatory tools or approaches,” the NHTSA stated.

They include FMVSS No. 135, which governs light vehicle brake systems; FMVSS No. 101, which covers controls and displays; and FMVSS No. 108, governing lamps, reflective devices and associated equipment.

In some cases, Google might be able to show that certain standards are unnecessary for a particular vehicle design, but it hasn’t yet made such a showing.

Google may have to seek exemptions to prove its vehicles meet FMVSS standards as an interim step because the NHTSA’s interpretations don’t fully resolve all the issues raised.

“All kinds of people are working on L4 cars, and there’s an indication the NHTSA’s going to be relatively accommodating with the granting of exemptions,” said Roger Lanctot, an associate research director at Strategy Analytics.

“The orientation of NHTSA is strongly toward taking the driver out of the driver seat,” he said.

Insurance and Liability

Current FMVSS rules will have to change to accommodate Google’s request, which will see changes in auto insurance, Frost & Sullivan’s Chandrasekar predicted, because “currently, insurance is decided based largely on the driver and minimally on the vehicle.”

Further, liability “is a huge factor, and that will need to be carefully analyzed as OEMs will end up being largely responsible,” he said. “This is why OEMs like Volvo, Audi and Mercedes-Benz have stated that in their L3 vehicles they’ll assume all liability when the vehicle is driving itself.”

Smart Email and the Path to Digital Immortality


I attended IBM Connect last week, where I checked out one of the most interesting products you’ve likely never heard of — a new email offering called “IBM Verse.” While there was a lot of discussion about how it better integrated social networking, what really intrigued me was the idea of putting cognitive computing inside an email client.

“Cognitive computing” is the new way of saying “artificial intelligence,” because, you know, the industry likes to change terms every once in a while just to mess with our heads. Regardless of what it’s called, thinking email could be incredibly powerful.

I’ll close with my product of the week, which has to be IBM Verse, the fascinating email product that focuses on the user. If I don’t tell you about it, you’ll likely never hear of it.

Email That Thinks

A lot of what we do with email is repetitive. That’s why executives in the past rarely handled their own correspondence; their secretaries would do it for them. Secretaries, apprentices or assistants set up meetings, offered birthday wishes, responded to inquiries — even sent direct messages. They often still do, which makes those roles especially powerful.

The fact is, if you get an email from a politician, chances are pretty good that it wasn’t written by that politician. It might not have been written by a human at all — but rather by some machine regurgitating the same text over and over again, mostly to annoy us.

If you could make an email system smart, it could do not only what secretaries used to do, but also a whole lot more — and likely better. You see, a human assistant typically would not be privy to all of your email or other expressions of your thoughts. An assistant might not know all of your friends or family, and certainly wouldn’t be well versed in your private and personal life.

An email system generally will handle most all of your daily correspondence, though, and if it were a smart email system tied into social networking, then over time, it likely would come to know you better than you know yourself.

As it gained insight, it not only could prioritize messages and automatically handle tasks like setting and changing appointments, but also could begin to respond for you, if you let it. You could opt to increase its responsibilities with your oversight.

Such a system could remove email as a chore for most of us, eliminate virtually all repetitive emails, and even allow us to be more accurate when dictating responses to email over our phones while driving. We could just give a command to write a response with key elements and let the system do the rest.

Valuable Advice

One of the big advantages of an intelligent email system would be dynamic advice. The system would be reading an email as it was created. If you’re like most of us, from time to time, you have written an email you later regretted sending. Through routine monitoring, a smart system could make suggestions on how to alter tone and reword a message to better accomplish your goal, or just notify you that what you’re writing could be deadly to, pick one, your career, marriage, relationship, safety or freedom.

I imagine that type of feature would be pretty useful on Twitter. In any case, it not only could act in your stead, but also could help you communicate more effectively and either keep you out of trouble or perhaps intercede after the fact.

Take this hypothetical alert, for example: “Email was not sent. It was determined that the racially and sexually insensitive material you were about to send to everyone in the company would result in a catastrophic response you may have not considered. Oh, and you forgot to capitalize Assh*le.” Could be incredibly valuable by itself.

Digital Immortality

Let’s push the envelope a bit. There are a number of projects designed to create an immortal digital concept of a person — a digital avatar, if you like. At the core of these projects is some process to capture what makes every person unique. The easiest way to do that would be to mine a person’s email for insights into personality, speech patterns, history and knowledge.

By increasingly being able to emulate someone, a smart email system eventually could create a decent digital clone that initially could interact over email, and perhaps with a good sound sample from the individual and the right speech integration, also do a pretty decent job of vocal emulation.

Imagine being able to send an email to a company founder who has died, asking for advice on a question of strategy or direction. Granted, the system might stay a bit stuck in time, given that it wouldn’t be able to create the source’s response on issues that were unknown during the individual’s lifetime, but enhancements over time likely could emulate those responses as well, creating a thinking, learning, growing version of the departed executive.

Let’s take Steve Jobs, for instance. I’ll bet Apple’s executive staff would like to have a chat with him from time to time, and if the Steve Jobs avatar were made visible, it likely could not only launch new products, but also interact with an audience.

Much of the email correspondence that would make such a thing possible still exists, and there is a chance that a digital version of Steve could be created from those records at some future point.

Wrapping Up: Email Smarter Than You Are

It does strike me that with smart TVs, smart cars, smartphones, and now smart email, there could come a time when we may not be smart enough ourselves, and we’ll need a significant upgrade.

Until then, things like smart email could serve as the bridge that frees up our time and keeps us from doing certain incredibly stupid things, like writing an email while angry.

Still, I can’t help but wonder how long it will be before one of those smart things decides we’re too stupid to interface with it.

Like a lot of you, I live in email. However, we really haven’t seen much of an improvement in email since Outlook was launched in the 1990s. Granted, that’s in large part our fault, as we really don’t like change much. Still, it is well past time that someone came up with a very different idea.

What IBM Verse does is funnel your email accounts and social network feeds into one client. It then learns to organize your communications based on priority. No more last in first out — you see your important stuff up front and can blow off your unimportant stuff more easily.
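The priority-over-recency idea can be pictured with a small sketch. This is a hypothetical illustration, not IBM Verse’s actual scoring logic; the fields and the heuristic are invented stand-ins for what a learned model would infer from your behaviour:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    subject: str
    received_order: int            # arrival sequence; higher = more recent
    from_frequent_contact: bool = False
    mentions_me: bool = False

def priority_score(msg: Message) -> float:
    """Toy heuristic standing in for a learned importance model."""
    score = 0.0
    if msg.from_frequent_contact:
        score += 2.0
    if msg.mentions_me:
        score += 1.0
    return score

def prioritized_inbox(messages):
    # Sort by score (highest first); break ties by recency,
    # instead of plain last-in-first-out ordering.
    return sorted(messages, key=lambda m: (-priority_score(m), -m.received_order))
```

In a real client the score would come from learned behaviour, such as who you reply to and what you open, rather than hand-written rules, but the sorting idea is the same: important mail surfaces first regardless of arrival order.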

IBM Verse

IBM is adding cognitive capability to the product, but it is far less capable than the imaginings I indulged in above. Right now, it can provide assistance with the tone and structure of an email you’re drafting, but as Watson becomes more capable, I expect that enhanced capabilities are in its future.

You have to see this product to appreciate it, though, as it would change your email experience substantially, and it could make you far more efficient and effective in handling written communications.

Ironically — at least for me, given that one of the projects I worked on while at IBM was voice mail integration with email — it doesn’t have that feature. Still, in most other ways it is a huge step forward in how email is handled. Because IBM Verse rethinks email, and I live in email, it is my product of the week.

Google to Put Self-Driving Cars Through Rainy-Day Paces


Google on Wednesday announced that it has chosen Kirkland, Washington, as the next location to test its self-driving cars.

It picked Kirkland as the third test city to give the cars more experience driving in new environments, traffic patterns and road conditions, the company said.

Google has conducted testing mainly at or near its campus in Mountain View, California. Last year it expanded to Austin, Texas.

Its self-driving cars have racked up 1.4 million miles, the company said, adding that people in Kirkland soon may be able to catch a glimpse of the latest test vehicle, a Lexus RX450h.

The move to Kirkland will allow the autonomous team to experience different — notably wetter — conditions, while the area outside of Seattle is known for its winding roads and quick changes in elevation, according to Google.

Real-World Conditions

Testing in varied weather and road conditions is considered crucial in the development of autonomous vehicles.

“Google has not conducted trials on public roads in areas outside California or Texas, where the weather is primarily clear,” said Sam Barker, a Juniper Research analyst.

“The big criticism that Google has been facing on its self-driving trials, despite clocking over 1 million miles, is the fact that most of it has been done in California where the weather is one dimensional throughout the year,” noted Praveen Chandrasekar, automotive and transportation research manager at Frost & Sullivan.

“The decision shows that Google is confident that the systems are able to stand up to adverse weather,” Barker said.

Diverse Conditions

Google’s expanded testing in Kirkland follows Ford’s announcement that it began testing in snow and icy conditions at the Mcity facility at the University of Michigan’s Mobility Transformation Center near Detroit.

Ford highlighted its efforts at last month’s North American International Auto Show.

Google isn’t “alone in testing against weather conditions. Ford claimed earlier this year that their autonomous vehicles were able to operate in snow when tested, being able to do so by mapping the area beforehand. However, it is understood that these tests were undertaken in a controlled environment,” said Barker, author of the report “Autonomous Vehicles: Adoption, Regulation & Business Models 2015-2025.”

Road Rules

Weather isn’t the only consideration in determining where to test autonomous vehicles.

“The California DMV’s proposed rule of having a driver behind the wheel might make it tough for Google to sustain its testing efforts only in California,” Chandrasekar said.

Opting for other testing locations also provides for greater climate and environmental diversity.

Google needs “more locations that present them with dynamic weather — like the rain in Washington — to calibrate the sensors and make sure the sensor fusion is providing the intended results, have an opportunity to improve coverage of their HD maps, and use the different road conditions — slopes in Washington — to understand real-world performance,” Chandrasekar said.

“This is basically an effort to get as close to real-world testing as possible before the different states start passing individual regulations that might prove to be a challenge to Google, like in California, for its completely driver-free self-driving cars,” he added.

Less Visibility

Varied weather will be crucial as autonomous vehicles rely on a number of advanced sensors. Just as weather can affect a human driver’s ability to see the road, it too can affect how the vehicle’s sensors operate.

“Systems such as Lidar have difficulty in differentiating between genuine obstacles and weather conditions, and camera-based systems are unable to see road markings or signs,” said Juniper Research’s Barker. “Ensuring autonomous systems are able to stand up to a change in weather conditions is the one of the hurdles facing those in development.”