Zurich-based IBM researchers have developed a new generic artificial-intelligence building block that has been shown to accelerate big-data machine-learning algorithms by at least 10 times compared with previously used methods.
For their initial demonstration, the researchers used a single Nvidia Quadro M4000 GPU with 8 gigabytes of memory, training on a 30-gigabyte data set of 40,000 photos with a support vector machine (SVM) algorithm that resolves the images into classes for recognition.
The key to the technique is preprocessing each data point to check whether it is the mathematical dual of a point already processed. If it is, the algorithm simply skips it, a shortcut that fires increasingly often as more of the data set is processed.
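The skipping idea can be illustrated with a toy linear SVM trained by sub-gradient steps: a point whose margin is already satisfied contributes nothing to the update, so it can be skipped, and as the model improves such points become the majority. This is a minimal sketch of that general idea, not IBM's actual method; all names and the training scheme are illustrative assumptions.

```python
# Illustrative sketch only: skip data points that no longer contribute
# to training a toy linear SVM. Not IBM's implementation.

def margin(w, x, y):
    """Signed margin y * (w . x) for label y in {-1, +1}."""
    return y * sum(wi * xi for wi, xi in zip(w, x))

def train_filtered(data, epochs=10, lr=0.1):
    """data: list of (x, y) pairs with y in {-1, +1}.
    Returns the weight vector and a count of skipped updates."""
    w = [0.0] * len(data[0][0])
    skipped = 0
    for _ in range(epochs):
        for x, y in data:
            if margin(w, x, y) >= 1.0:
                # Point is already well classified: its update would be
                # zero, so skip it. Skips grow more frequent over time.
                skipped += 1
                continue
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w, skipped
```

On a separable data set, most passes after the first touch almost no points, which is the source of the speedup the article describes: the expensive work concentrates on the few points that still matter.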
“If you can fit your problem in the memory space of the accelerator, then running in-memory will achieve even better results,” IBM researcher Thomas Parnell told EE Times. “So our results apply only to Big Data problems. Not only will it speed up execution time by 10 times or more, but if you are running in the cloud, you won’t have to pay as much.”
Simply put, IBM has created a building block that uses mathematical duality to separate the important information from the less important bits. As the data set is processed, an ever larger share of the points can be skipped as unimportant, cutting processing times by 10 times or more.
The algorithm is being developed for use on IBM’s data cloud, which is aimed at companies dealing with large data sets for applications such as social-media analysis, advertising targeting, fraud detection, and profile analysis.
Though there is still a long way to go in AI and deep-learning development, it is clear the race is on between major software and hardware developers around the world to be the first to make the next major breakthrough in AI.
Reference: EE Times