Personal Digital Assistants (PDAs) Are Making Our Lives Simpler

Back in the 1990s, the IT world came up with the idea of the personal digital assistant (PDA). The concept quickly became popular: most PDAs combined many features of cell phones, which is why they were also called “the first smartphones,” and they were handy tools for managing both personal and business life.

IBM introduced the world’s first PDA with full cell phone functionality. The company called its innovation the IBM Simon, and it is also considered the first smartphone. Simon featured applications such as a calendar, an appointment scheduler, a world time clock and a notepad, all aimed at keeping users’ lives organized.

Then, in 1996, the mobile giant Nokia produced the world’s best-selling PDA, the 9000 Communicator, also with full cell phone functionality. Later the same year, Palm Computing presented the first generation of its PDAs, a great solution for anyone who wanted to keep a busy life organized.

Fifteen years later, in 2011, Apple introduced Siri, which made seemingly impossible things real. It allowed people to speak their search queries into a phone, and Siri would understand and carry them out. iPhone users started to feel a sort of emotional attachment to their devices.

After this enormous success, Apple’s main rival, Google, launched its own assistant feature in 2012, and Microsoft then developed its Cortana personal assistant. In 2015, . In spring 2016, MYLE will reveal its first intelligent personal assistant, designed to outpace any other solution on the market.

Many companies are interested in the PDA market, and according to recent research, nearly 39% of smartphone owners use some sort of PDA, such as Google Now or Siri. This trend is expected to keep rising over the next five years.

PDAs are popular with many entrepreneurs because they can practically organize their day: suggesting travel routes, advising on current weather conditions, finding the best restaurants based on previous history, conducting faster web searches and setting up meetings. They also help by suggesting venues for business meetings and remembering things like important thoughts and crucial business ideas.

Most contemporary PDAs can be controlled by voice or through manual data input.

However, the features above are only a drop in the ocean compared to what PDAs will do for their users in the next couple of years. Another key feature is their integration with other systems and apps, such as Slack, Wunderlist, Evernote or Salesforce.

For instance, to save notes, ideas, to-dos and other tasks to Evernote, a MYLE user only has to speak the command to the device. It automatically recognizes the words, converts the speech to text and routes each command to its assigned application.
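
As a rough illustration of that kind of pipeline (not MYLE’s actual implementation), here is a minimal Python sketch of the routing step. It assumes the speech has already been transcribed to text by some external service, and the `route_command` function and its keyword rules are invented for this example:

```python
# Hypothetical sketch: route a transcribed voice command to an app.
# The speech-to-text step is assumed to have happened upstream.

KEYWORD_RULES = {
    "evernote": ["note", "remember", "idea"],        # notes and ideas
    "wunderlist": ["to-do", "todo", "task", "buy"],  # task lists
    "calendar": ["meeting", "schedule", "appointment"],
}

def route_command(transcript: str) -> str:
    """Return the name of the app a transcribed command belongs to."""
    text = transcript.lower()
    for app, keywords in KEYWORD_RULES.items():
        if any(keyword in text for keyword in keywords):
            return app
    return "inbox"  # fall back to a generic inbox if nothing matches

if __name__ == "__main__":
    print(route_command("Remember the idea about the new landing page"))  # evernote
    print(route_command("Schedule a meeting with the design team"))       # calendar
```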

The MYLE device is not limited to these functions, however. Its AI algorithm will improve itself to better understand users’ behaviour, using the latest breakthroughs in machine learning.
MYLE’s stated purpose for its PDA is to take over entrepreneurs’ routines and free up their tight schedules for leisure, family and other activities.

Powered by artificial intelligence, PDAs have already become an essential tool for many businesses and consumers.

MIT Team Builds An Energy-Friendly Chip to Perform Powerful AI Tasks

A team from the Massachusetts Institute of Technology (MIT) has built an energy-efficient chip, specially designed to implement neural networks, that performs powerful artificial intelligence (AI) tasks.

According to the MIT researchers, the chip is 10 times as efficient as a mobile GPU (graphics processing unit), meaning it could enable mobile devices to run powerful AI algorithms locally rather than uploading data to the Internet for processing.

Vivienne Sze, the Emanuel E. Landsman Career Development Assistant Professor in MIT’s Department of Electrical Engineering and Computer Science, whose group developed the new chip, notes that these networks are typically quite complex and currently run mostly on high-power GPUs.

Sze gives several reasons why it is better for devices to run such networks locally rather than over the Internet: your phone keeps working even when there is no Wi-Fi connection available to ship large amounts of data off for processing, your information stays private on the device, and avoiding transmission latency lets applications react much faster.

The MIT researchers presented the new chip at the International Solid-State Circuits Conference in San Francisco.

The new chip is called “Eyeriss,” and according to MIT, “its key to efficiency is to minimize the frequency with which cores need to exchange data with distant memory banks, an operation that consumes a good deal of time and energy.”

Moreover, whereas many of the cores in a GPU share a single, large memory bank, each of Eyeriss’s cores has its own memory. The chip can also compress data before sending it to individual cores.

The Eyeriss chip has 168 cores that can communicate directly with one another, so if they need to share data, they don’t have to route it through main memory.

“This is essential in a convolutional neural network, in which so many nodes are processing the same data.”
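
To see why so many nodes process the same data, consider a simple 1D convolution: every output position re-reads a window of inputs, so most input samples are fetched many times. The short Python sketch below just counts those reads; it illustrates convolutional data reuse and is not a model of Eyeriss’s actual dataflow:

```python
# Count how many times each input sample is read by a naive 1D convolution.
# Each output position re-reads `kernel_size` consecutive inputs, so samples
# in the middle of the signal are fetched kernel_size times.

def input_read_counts(signal_len: int, kernel_size: int) -> list[int]:
    counts = [0] * signal_len
    for out_pos in range(signal_len - kernel_size + 1):  # "valid" convolution
        for k in range(kernel_size):
            counts[out_pos + k] += 1
    return counts

if __name__ == "__main__":
    print(input_read_counts(signal_len=10, kernel_size=3))
    # [1, 2, 3, 3, 3, 3, 3, 3, 2, 1] -> most samples are read 3 times
```

Keeping those repeatedly used samples in a core’s local memory, rather than fetching them again from a distant memory bank, is exactly the kind of saving the quote describes.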

Neural nets, now often referred to as “deep learning,” were widely studied in the early days of AI, but by the 1970s the field had fallen out of favour.

Sze says that deep learning is useful for many applications, such as speech and object recognition and face detection.

An Introduction to Machine Learning (ML).

We’re starting this series of articles about the discipline of Machine Learning with an opening question from Tom Mitchell:

“How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?”

According to Mitchell, this question covers a broad range of learning tasks, and it really does. Think about it: how do we develop a system that can automatically learn from its environment in the kinds of settings humans encounter every day?

Which algorithms should engineers write so that the system makes rational predictions and conclusions, and, when it makes an error, can identify and remember the problem so as to avoid it in the future?

For example, can we develop an autonomous system that builds accurate predictions about which medications will work best for a patient’s treatment, based on the observations and data it has gathered?

We can frame such a question the way Mitchell does:

“A machine learns with respect to a particular task T, performance metric P, and type of experience E, if the system reliably improves its performance P at task T, following experience E. Depending on how we specify T, P, and E, the learning task might also be called by names such as data mining, autonomous discovery, database updating, programming by example, etc.”
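
To make the definition concrete, here is a small illustrative Python sketch (our own framing, not Mitchell’s code) that spells out T, P and E for the medication example above:

```python
from dataclasses import dataclass

@dataclass
class LearningTask:
    task: str         # T: what the system has to do
    performance: str  # P: how we measure whether it is doing it well
    experience: str   # E: what data or interaction it learns from

medication_task = LearningTask(
    task="predict which medication best helps a given patient",
    performance="fraction of patients whose condition improves",
    experience="historical patient records with treatments and outcomes",
)
print(medication_task)
```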

Many specialists say that machines should learn the way humans do: through observation, checking data, use cases, experience or direct instruction.

So basically, the principle of ML is to learn a process in order to perform it better and more accurately in the future. This is what the MYLE team aims to build: an assistant powered by an algorithm that learns your speech patterns, financial behaviour, communication habits and other important aspects of your life in order to make accurate predictions that enhance your life.

ML is also a core subarea of artificial intelligence (AI). After all, you can hardly call a system intelligent if it cannot operate automatically, without human intervention or assistance.

Thus, ML is a core element of building AI because AI must, for example, process language the way humans do. Humans sometimes communicate using extremely complex sentence structures with hidden meaning, and it remains challenging to develop a computer system capable of understanding such phrasing.

This is where a truly intelligent system begins: with the ability to understand and apply gained knowledge to simplify and enhance existing processes. Hence, we cannot consider a system truly intelligent if it is unable to learn, since learning is the universal process of gaining knowledge or skills by studying, practicing or experiencing something.

Over the last decade, engineers have written thousands of ML algorithms, and all of them are built from the same three essential components, which form the framework of any ML algorithm (a small sketch follows the list below):

  • Representation: how to represent knowledge.
  • Evaluation: the way to evaluate candidate programs (hypotheses).
  • Optimization: the way candidate programs are generated, known as the search process.
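
Here is the minimal sketch referred to above: a tiny linear model trained by gradient descent in plain Python, with comments mapping each piece to representation, evaluation and optimization (the data points and learning rate are invented for illustration):

```python
# Representation: a straight line y = w * x + b (the space of candidate programs).
# Evaluation: mean squared error scores each candidate.
# Optimization: gradient descent searches for better candidates.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (x, y) pairs

def mse(w: float, b: float) -> float:
    """Evaluation: how badly does the candidate (w, b) fit the data?"""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def step(w: float, b: float, lr: float = 0.01) -> tuple[float, float]:
    """Optimization: one gradient-descent update toward a better candidate."""
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    return w - lr * grad_w, b - lr * grad_b

w, b = 0.0, 0.0  # initial candidate program
for _ in range(2000):
    w, b = step(w, b)
print(f"learned line: y = {w:.2f} * x + {b:.2f}, error = {mse(w, b):.3f}")
```
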

In principle, a model can be trained in countless different ways, but only four learning methods are commonly used today.

Supervised learning is a learning model where, during training, the system is required to make predictions and is corrected when those predictions are wrong.

The training process continues until the model achieves the desired level of accuracy on the training data. The goal is to have the system learn a classification scheme that we have created and presented to it. One of the most typical examples of supervised learning is digit recognition.
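
As a minimal illustration of supervised digit recognition, here is a sketch using scikit-learn (our choice of library and model, purely for demonstration):

```python
# Supervised learning: labelled images of handwritten digits train a classifier,
# and accuracy on held-out data measures how well the labels were learned.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale digit images with labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)  # corrected whenever its predictions are wrong
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```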

This learning model is especially useful when the problem is easy to specify. However, if the machine cannot find a common pattern in the given data, it is likely to fail to identify the problem. Hence, this method is impractical for systems in constantly changing environments.

Unsupervised learning is a learning model where the input data is not labelled and there is no known result. This method is more intriguing and harder, because the main goal is to make the system learn to do something without being told how to do it.

This learning model has two approaches to teaching the machine how to do certain tasks.

The first approach is to teach the system not by giving straightforward classifications but by using some sort of reward to indicate its success. This approach fits the decision-problem framework because the primary objective is not to produce classifications but to make the correct decisions that maximize reward.

Interestingly, this approach closely simulates a real-world environment, where the system is motivated to earn rewards and to avoid being punished for failing to do a task or doing it incorrectly.

The second approach within this learning model is called clustering. Here, the system’s primary objective is to find similarities in the data. This approach works well when there is sufficient data; for instance, social information filtering algorithms, such as those Amazon uses for its book recommendations.

Those algorithms classify readers and find clusters of similar people who read the same books or fit into a specific category of readers.
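
A toy clustering sketch in the spirit of that example, using k-means from scikit-learn (the reader data and genre counts are invented for illustration and have nothing to do with Amazon’s actual algorithms):

```python
# Unsupervised learning: no labels, just grouping readers by similar behaviour.
import numpy as np
from sklearn.cluster import KMeans

# Rows are readers, columns are books read per genre: [sci-fi, business, cooking]
readers = np.array([
    [9, 1, 0], [8, 2, 1], [7, 0, 1],   # mostly sci-fi readers
    [1, 8, 0], [0, 9, 2], [2, 7, 1],   # mostly business readers
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(readers)
print("cluster of each reader:", kmeans.labels_)
# Readers in the same cluster can then be recommended each other's books.
```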

Semi-supervised learning is a model that combines both labelled and unlabelled data.

In semi-supervised learning, the system is given the desired predictions for only some of the examples; it then has to teach itself how to organize the remaining data and make predictions for it.
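
One common way to sketch this is self-training, where the model labels the unlabelled examples it is confident about and then retrains on them. The Python example below uses scikit-learn; the data split and confidence threshold are invented for illustration:

```python
# Semi-supervised learning: a few labelled digits plus many unlabelled ones.
# The model pseudo-labels the unlabelled examples it is confident about, then retrains.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
X, y = digits.data, digits.target
labelled = np.arange(len(X)) < 100          # pretend only the first 100 have labels
X_lab, y_lab = X[labelled], y[labelled]
X_unlab = X[~labelled]

model = LogisticRegression(max_iter=2000).fit(X_lab, y_lab)
for _ in range(3):                           # a few rounds of pseudo-labelling
    probs = model.predict_proba(X_unlab)
    preds = model.predict(X_unlab)
    confident = probs.max(axis=1) > 0.95     # keep only confident guesses
    model = LogisticRegression(max_iter=2000).fit(
        np.vstack([X_lab, X_unlab[confident]]),
        np.concatenate([y_lab, preds[confident]]),
    )
print("accuracy on all data:", model.score(X, y))
```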

Reinforcement learning is a learning model that lets the system automatically determine the ideal behaviour within a specific task in order to maximize its performance. The system is not told which actions to take; instead it must discover which actions generate the most reward.

This model uses Markov decision processes because, in some cases, an action affects not only the immediate reward but also the next situation and, through it, all subsequent rewards.

A good example for understanding reinforcement learning is this one (retrieved from the University of Alberta):

A mobile robot decides whether it should enter a new room in search of more trash to collect or start trying to find its way back to its battery recharging station. It makes its decision based on how quickly and easily it has been able to find the recharger in the past.
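
A tiny tabular Q-learning sketch in the spirit of that example (the states, actions, rewards and probabilities are all invented for illustration and are not taken from the University of Alberta material):

```python
# Reinforcement learning: the robot learns from reward which action pays off
# in each battery state, without ever being told the "right" answer.
import random

random.seed(0)
STATES = ["high_battery", "low_battery"]
ACTIONS = ["search_for_trash", "go_recharge"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy environment: searching earns reward but drains the battery;
    searching on a low battery risks a costly rescue."""
    if action == "go_recharge":
        return "high_battery", 0.0
    if state == "high_battery":
        return ("low_battery" if random.random() < 0.5 else "high_battery"), 1.0
    # Searching with a low battery: sometimes the robot dies and must be rescued.
    return ("high_battery", -10.0) if random.random() < 0.3 else ("low_battery", 1.0)

alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration
state = "high_battery"
for _ in range(20000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                     # explore
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

for s in STATES:  # learned policy: search when charged, recharge when low
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
```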

In our next article we’re going to look at some key elements, history and challenges related to Machine Learning.
