05-14-2017, 11:35 PM
Michael Dell said his company is seeing some customers return to buying equipment after discovering that working with the giant cloud-computing providers is more complicated and costlier than they expected. Hardware from Dell Technologies Inc. is getting a second look from companies finding that outsourcing their computing needs to the public cloud, dominated by big providers such as Amazon.com Inc. and Microsoft Corp., can cost twice as much as running it themselves, or more, Dell said Monday. While he didn't provide specifics, Dell added, "it's not small numbers; it's large numbers."
Dell Says 'Large' Number of Companies Return after Pricey Public Cloud - Bloomberg
Box, a popular cloud storage company for businesses, now employs machine learning to allow its service to peek into files and figure out what they contain. The idea is to let users search for, say, every press release related to a particular product, or the minutes from a string of important meetings, or even photos of the CEO dancing at the last office holiday party. This would be a compelling new way for individuals and companies to search for and organize their data. It also shows the potential for cutting-edge AI techniques to change the nature of everyday office work.
Rejoice, Disorganized Workers: This Smart Cloud Looks After Your Files For You
Cloudera Inc., the big-data company backed by Intel Corp., hired underwriters for an initial public offering that could come as soon as this year, people with knowledge of the matter said. The company, based in Palo Alto, California, is eyeing a valuation of about $4.1 billion, said the people, in line with what it fetched in its last private round three years ago. Cloudera notified a number of firms this month that they’d been picked to lead the IPO, said the people, who asked not to be identified because the information is private.
Cloudera Said to Choose Banks for IPO as Soon as This Year - Bloomberg
We’ve been using compute-intensive machine learning in our products for the past 15 years. We use it so much that we even designed an entirely new class of custom machine learning accelerator, the Tensor Processing Unit. Just how fast is the TPU, actually? Today, in conjunction with a TPU talk for a National Academy of Engineering meeting at the Computer History Museum in Silicon Valley, we’re releasing a study that shares new details on these custom chips, which have been running machine learning applications in our data centers since 2015. This first generation of TPUs targeted inference (the use of an already trained model, as opposed to the training phase of a model, which has somewhat different characteristics), and here are some of the results we’ve seen.
Google Cloud Platform Blog: Quantifying the performance of the TPU, our first machine learning chip
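The inference-versus-training distinction the Google post draws can be illustrated with a toy model (a minimal NumPy sketch on made-up data, not Google's actual TPU workload): training repeats forward and backward passes to update weights, while inference is a single forward pass over frozen weights, which is why an inference-only accelerator can make different hardware trade-offs (such as lower-precision arithmetic).

```python
import numpy as np

# Toy logistic-regression model: p = sigmoid(x . w + b).
# Dataset and hyperparameters below are hypothetical, for illustration only.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(x, y, steps=500, lr=0.5):
    """Training: repeated forward passes PLUS gradient computation
    and weight updates -- the phase with 'somewhat different
    characteristics' the post alludes to."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=x.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(x @ w + b)           # forward pass
        grad_w = x.T @ (p - y) / len(y)  # backward pass
        grad_b = np.mean(p - y)
        w -= lr * grad_w                 # weight update
        b -= lr * grad_b
    return w, b

def infer(x, w, b):
    """Inference: a single forward pass over already-trained weights --
    the workload the first-generation TPU targeted."""
    return sigmoid(x @ w + b) > 0.5

# Tiny separable dataset: label is 1 when the first feature is positive.
x = np.array([[1.0, 0.2], [2.0, -1.0], [-1.5, 0.3], [-0.5, -2.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])

w, b = train(x, y)
print(infer(x, w, b).tolist())
```

Note how `infer` touches no gradients and updates nothing: its work is one matrix multiply and an activation, exactly the kind of fixed-function arithmetic an inference chip can specialize for.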

