Wednesday, 23 December 2020

An Internet of Things and Machine Learning Based System to Measure Precursors of Epileptic Seizures by Bahman Zohuri in Open Access Journal of Biogeneric Science and Research (JBGSR)


Abstract

Artificial intelligence is a new phenomenon that has come to occupy a prominent place in our lives. Almost any industry that deals with a huge volume of data is taking advantage of AI by integrating it into its day-to-day operations. AI's predictive power rests on its data-analytic functionality and a degree of autonomous learning, and its raw ingredient is simply a massive volume of data. Artificial intelligence is about extracting value from data, and that value becomes core business value once insight can be extracted. AI has various fundamental applications, and the technology can be applied to many different sectors and industries. There has been tremendous use of artificial intelligence in nanotechnology research during the last decades. Convergence between artificial intelligence and nanotechnology can shape the path for technological developments across a large variety of disciplines. In this short communication, we present such innovative and dynamic areas where artificial intelligence and its subsets, machine learning driven by deep learning, are utilized in nanotechnology.

Keywords: Artificial Intelligence; Machine Learning; Deep Learning; Nanoscience; Nanotechnology; Atomic Force Microscopy; Simulations; Nanocomputing

Abbreviations: AI: Artificial Intelligence; AFM: Atomic Force Microscope; STM: Scanning Tunneling Microscope; ML: Machine Learning; DL: Deep Learning; ANN: Artificial Neural Network; BMI: Brain Machine Interfaces; EEG: Electroencephalograph; HPC: High-Performance Computing; PSPD: Position-Sensitive Photodiode; IoT: Internet of Things; TCO: Total Cost of Ownership; ROI: Return on Investment

Introduction

When Richard Feynman (Figure 1), an American physicist, Nobel laureate, and physics professor at the California Institute of Technology (Caltech), gave a talk titled “There's Plenty of Room at the Bottom” [1] at an American Physical Society (APS) meeting at Caltech on December 29, 1959, the door to the ideas and concepts behind nanoscience and nanotechnology was opened. Of course, this talk came well before the term nanotechnology entered ordinary daily English.

 

In his talk, Feynman described a process in which scientists would manipulate and control individual atoms and molecules. Over a decade later, in his explorations of ultraprecision machining, Professor Norio Taniguchi coined the term nanotechnology. It was not until 1981, with the development of the scanning tunneling microscope that could "see" individual atoms, that modern nanotechnology began.

 

These days, any discussion of nanotechnology should start with how small things can be, such as an atom of any element, or, from a scaling point of view, with the “size of the nanoscale”: basically, just how small is “nano”? In the metric MKS system, or International System of Units (SI), the prefix “nano” means one-billionth, or 10^-9; therefore, one nanometer is one-billionth of a meter. It is difficult to imagine just how small that is, so here are some examples to make the matter clearer [2], followed by a short conversion sketch after the list.

 

1. A sheet of paper is about 100,000 nanometers thick.

2. A human hair is approximately 80,000-100,000 nanometers wide.

3. A single gold atom is about a third of a nanometer in diameter.

4. On a comparative scale, if a marble diameter were one nanometer, then the diameter of the Earth would be about one meter.

5. One nanometer is about as long as your fingernail grows in one second.

6. A strand of human DNA is 2.5 nanometers in diameter.

7. There are 25,400,000 nanometers in one inch.
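
To make these conversions concrete, here is a minimal Python sketch (illustrative only; the constants are taken from the list above) that expresses a few of these lengths in code:

```python
# Illustrative unit conversions for the nanoscale examples above.
NM_PER_METER = 1_000_000_000  # "nano" means one-billionth: 1 m = 1e9 nm
NM_PER_INCH = 25_400_000      # item 7 above

def meters_to_nm(meters: float) -> float:
    """Convert a length in meters to nanometers."""
    return meters * NM_PER_METER

print(meters_to_nm(0.0001))   # a ~0.1 mm sheet of paper: 100000.0 nm
print(NM_PER_INCH)            # nanometers in one inch: 25400000
print(2.5 / 80_000)           # DNA strand as a fraction of a hair's width
```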

 

The illustration in Figure 2 gives three visual examples of the size and scale of nanotechnology, showing just how small things at the nanoscale are. Nanoscience and nanotechnology involve the ability to see and to control individual atoms and molecules. Everything on Earth is made up of atoms - the food we eat, the clothes we wear, the buildings and houses we live in, and our bodies. But something as small as an atom is impossible to see with the naked eye. In fact, it is impossible to see with the microscopes typically used in a high school science class. The microscopes needed to see things at the nanoscale were invented in the early 1980s.

 

Once scientists had the right tools, such as the Scanning Tunneling Microscope (STM) and the Atomic Force Microscope (AFM), the age of nanotechnology was born. Although modern nanoscience and nanotechnology are quite new, nanoscale materials had been used for centuries. Alternate-sized gold and silver particles created the colors in the stained-glass windows of medieval churches hundreds of years ago. The artists back then did not know that the process they used to create these beautiful works of art actually led to changes in the composition of the materials they were working with.

 

Today's scientists and engineers are finding a wide variety of ways to deliberately make materials at the nanoscale to take advantage of their enhanced properties such as higher strength, lighter weight, increased control of light spectrum, and greater chemical reactivity than their larger-scale counterparts [3]. 

Figure 1: Professor Richard Feynman.

Figure 2: The Scale of Things [2].

 

Bottom line, nanotechnology is the use of matter on an atomic, molecular, and supramolecular scale for industrial purposes; it is the design, production, and application of structures, devices, and systems by manipulation of size and shape at the nanometer scale. Thus, when it comes to nanoscience and nanotechnology, where we deal with the smallest scale sizes, we can say that “small is powerful”.

 

However, when we deal with the small, particularly at the scale of the atom, then for all practical situations and applications, especially in the field of medicine, we encounter a sheer volume of data that we need to collect and analyze at real-time speed. This requires a great deal of data analytics and data mining, a practice that is beyond human capacity. With Artificial Intelligence (AI) technology and its technical strides of the past decade, we can turn to this partner for help. By today's standard of AI implementations, practically across every industry, we know that AI and the human brain complement each other at every level of business [4].

 

For example, in biomedicine, using artificial intelligence for drug discovery is the practice of using computational methods to research new pharmaceuticals and to repurpose existing compounds for new use cases. Drug discovery with Artificial Intelligence (AI) and its two sub-components, Machine Learning (ML) and Deep Learning (DL), can help improve productivity and ensure regulatory compliance, transform data at the speed of the computer's Central Processing Unit (CPU), digitize at scale and speed (i.e., nanotechnology), and optimize the business.

 

Since today we encounter a fast-paced, sheer volume of data in our day-to-day operations, Artificial Intelligence (AI) is already changing the way we think and operate. But what is the full potential of this technology, and how best can you realize it? In the field of medicine, integrating artificial intelligence and nanotechnology enables more precise cancer medicine. Artificial intelligence (AI) and nanotechnology are two fields that are instrumental in realizing the goal of precision medicine: tailoring the best treatment for each cancer patient.

 

A confluence of technological capabilities is creating an opportunity for machine learning and Artificial Intelligence (AI) to enable “smart” nanoengineered Brain Machine Interfaces (BMI). This new generation of technologies will be able to communicate with the brain in ways that support contextual learning and adaptation to changing functional requirements. This applies both to invasive technologies aimed at restoring neurological function, as in the case of neural prostheses, and to non-invasive technologies enabled by signals such as the electroencephalograph (EEG).

 

Advances in computation, hardware, and algorithms that learn and adapt in a contextually dependent way will be able to leverage the capabilities that nanoengineering offers to the design and functionality of BMIs.

 

 

These are a few examples of how Artificial Intelligence and nanoscience/nanotechnology can shake hands and augment each other very well. In the next few sections of this short paper, we try to explore this opportunity for collaboration between the two technologies.

What is Artificial Intelligence, Machine Learning, and Deep Learning?

The past decade has seen a new revolutionary technology that has applications across the entire industry. This innovative technology, called Artificial Intelligence, has been driving Business Intelligence to a different level in any business operation with a magnitude of incoming data to be analyzed. The day-to-day running of these business operations with a sheer volume of data (i.e., Big Data) requires the augmentation of AI in conjunction with High-Performance Computing (HPC). Artificial Intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. In other words, AI, the new buzzword of the technology market, is the science of making machines as smart and intelligent as humans, with the ultimate goal of progressing from weak AI to super-AI.

 

Such progression within the domain of AI is, by definition, the ability of a computer algorithm or program, particularly on a High-Performance Computing (HPC) machine, to think and learn much like a human being. AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we gather information, analyze the data, and utilize the resulting insight to make better-informed decisions. These days, AI is essential, since the amount of data generated by humans and machines far outpaces humans' ability to absorb and interpret the data and make complex decisions based on it.

 

AI is increasingly a part of our everyday environment in systems including virtual assistants, expert systems, and self-driving cars. Nevertheless, the technology is still in the early days of its development. Although they may vary in terms of their abilities, all current AI systems are examples of weak AI. The field of artificial intelligence moves fast; progress has been breathtaking and relentless, and five years from now the field will look very different than it does today. AI keeps getting smarter, and a super AI would be the best at everything: mathematics, science, medicine, hobbies, you name it. Even the brightest human minds cannot come close to the abilities of super AI.

 

With this basic understanding of AI, there are certain key factors one should know about AI (Figure 3):

1. It is essential to distinguish different types of Artificial Intelligence and different phases of AI evolution when it comes to developing application programs.

2. Without recognizing the different types of AI and the scope of the related applications, confusion may arise, and expectations may be far from reality.

3. In fact, the "broad" definition of Artificial Intelligence is "vague" and can cause a misrepresentation of the type of AI that we discuss and develop today.

 

 

To understand how Artificial Intelligence works, one needs to take a deep dive into the various sub-domains of Artificial Intelligence and understand how those domains can be applied to various fields of industry.

Figure 3: The Pyramid of AI, ML and DL.

What is Machine Learning?

Machine learning is the branch of artificial intelligence that addresses how to build computers that automatically improve through experience. Indeed, machine learning is all about extracting knowledge from data. It is a research field at the intersection of statistics, artificial intelligence, and computer science and is also known as predictive analytics or statistical learning. The main idea of machine learning is that it is possible to create algorithms that learn from data and make predictions based on them. Recent progress in machine learning has been driven by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation.

 

With the immense growth of data, machine learning has become a key technique for solving problems. Machine learning finds the natural patterns in data that generate insight to help make better decisions and predictions. It is an integral part of many commercial applications, ranging from medical diagnosis and stock trading to energy forecasting and many more.

 

 

Consider the situation where we have a complicated task or problem involving a large amount of data with many variables but no existing formula or equation. Machine learning is part of a new employment dynamic, creating jobs that center around analytical work augmented by Artificial Intelligence (AI). Machine Learning (ML) provides smart alternatives for analyzing vast volumes of data. ML can produce accurate results and analyses by developing fast and efficient algorithms and data-driven models for real-time data processing, as the minimal example below illustrates.
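
As a sketch of what "learning from data and making predictions" looks like in practice (illustrative only; the data are synthetic and the widely used scikit-learn library is assumed to be installed):

```python
# Minimal machine learning sketch: fit a model to data, then predict.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=0)
X = rng.uniform(0, 10, size=(200, 1))               # 200 samples, 1 feature
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.5, 200)   # noisy linear target

model = LinearRegression()
model.fit(X, y)                 # "improve through experience" on the data
print(model.predict([[4.0]]))   # prediction for an unseen input, close to 14
```

The point is not the specific model: the algorithm discovers the pattern (here, a slope and intercept) from examples rather than from a hand-written formula.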

What is Deep Learning?

Deep Learning (DL) is a subset of machine learning, which is, in turn, a subset of artificial intelligence. Deep learning is inspired by the structure of the human brain. Deep learning algorithms attempt to draw conclusions similar to those humans would reach by continually analyzing data with a given logical structure. To achieve this, deep learning uses a multi-layered structure of algorithms called neural networks. Just as humans use their brains to identify patterns and classify different types of information, neural networks can be taught to perform the same tasks on data.

 

Whenever humans receive new information, the brain tries to compare it with known objects. Deep neural networks also use the same concept. By using the neural network, we can group or sort the unlabeled data based on similarities among the samples in the data. Artificial neural networks have unique capabilities that enable deep learning models to solve tasks that machine learning models can never solve.

 

 

One of the main advantages of deep learning lies in solving complex problems that require discovering hidden patterns in the data and/or a deep understanding of intricate relationships between a large number of interdependent variables. When there is a lack of domain understanding for feature introspection, Deep Learning techniques outshine others, as you have to worry less about feature engineering. Deep Learning shines when it comes to complex problems such as image classification, natural language processing, and speech recognition.
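
To make the "multi-layered structure" concrete, here is a toy forward pass through a two-layer neural network in NumPy (a sketch only; the weights are random here, whereas in deep learning they would be adjusted by training):

```python
# Toy forward pass through a small two-layer neural network.
import numpy as np

rng = np.random.default_rng(seed=1)
x = rng.normal(size=4)                 # input vector with 4 features

W1 = rng.normal(size=(8, 4))           # layer 1: weights
h = np.maximum(0, W1 @ x)              # linear transform + ReLU nonlinearity

W2 = rng.normal(size=(3, 8))           # layer 2: weights
logits = W2 @ h                        # linear transform to 3 output classes
probs = np.exp(logits) / np.exp(logits).sum()   # softmax
print(probs)                           # class probabilities summing to 1
```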

Artificial Intelligence’s Application in Nanotechnology

Artificial intelligence has been a growing area for many decades now, not just within itself, where the areas of machine learning, deep learning, and artificial neural networks work simultaneously, but also in the number of fields and industries in which they are now prevalent. Nanoscience and nanotechnology are the study and application of extremely small things. There are several growing areas where AI converges with nanotechnology.

 

 

During the last decade, there has been increasing use of artificial intelligence tools in nanotechnology research. In this paper, we review some of these efforts in the context of interpreting scanning probe microscopy, simulations, and nanocomputing. 

The Application of AI in Atomic Force Microscopy (AFM)

Atomic force microscopy is the most versatile and powerful microscopy technology for studying samples at the nanoscale. It is considered versatile because it can image three-dimensional topography and provides the various surface measurements that scientists and engineers need. Although atomic force microscopy is considered a significant advance, it still faces the challenge of obtaining high-quality signals from its imaging devices. The predominant problem in atomic force microscopy is that many of the tip-sample interactions these microscopes rely on are complex, varied, and therefore not easy to decipher, especially when trying to image samples at the nanoscale and manipulate matter at the atomic level.

 

The AFM uses a cantilever with a very sharp tip to scan over the sample surface (Figure 4). As the tip approaches the surface, the attractive close-range force between the surface and the tip makes the cantilever deflect toward the surface. However, as the cantilever is brought closer still, so that the tip increasingly makes contact with the surface, the repulsive force takes over and causes the cantilever to deflect away from the surface.

 

A laser beam is used to detect the cantilever deflection toward or away from the surface by reflecting an incident beam off the cantilever's flat top. Any cantilever deflection will cause slight changes in the direction of the reflected beam, and a Position-Sensitive Photodiode (PSPD) is used to track this change. Thus, if an AFM tip passes over a raised surface feature, the resulting cantilever deflection and the subsequent change in the reflected beam's direction are recorded by the PSPD.

 

The AFM images the topography of a sample surface by scanning the cantilever over the region of interest. The raised and lowered features on the sample surface affect the cantilever's deflection, which is monitored by the PSPD. By using a feedback loop to control the height of the tip above the surface, and thus maintain a constant laser position, the AFM can generate an accurate topographic map of the surface features; a simplified sketch of such a loop follows.
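
The following is a deliberately simplified model of that feedback loop (a sketch under stated assumptions: measure_deflection is a hypothetical stand-in for the PSPD reading, and a real AFM controller is considerably more sophisticated):

```python
# Simplified AFM feedback loop: adjust tip height so that the measured
# cantilever deflection stays at a constant setpoint (PI control).
def afm_feedback(measure_deflection, setpoint, kp=0.5, ki=0.05,
                 steps=2000, dt=1e-3):
    height, integral = 0.0, 0.0
    for _ in range(steps):
        error = measure_deflection(height) - setpoint  # PSPD reading vs target
        integral += error * dt
        height += kp * error + ki * integral  # retract when pressing too hard
    return height  # heights recorded over a scan form the topographic map

# Toy plant: deflection falls as the tip is raised; with setpoint 0.2 the
# loop drives the height toward 0.8.
print(afm_feedback(lambda h: 1.0 - h, setpoint=0.2))
```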

 

 

AI can be beneficial in dealing with these kinds of signal-related issues. The AI approach known as functional recognition addresses the problem by identifying local behavior directly from measured spectroscopic responses. Artificial Neural Networks (ANNs) can recognize the local behavior of the material being imaged, leading to a simplification of the data and a reduction in the number of variables that need to be considered. Overall, this leads to a much more efficient imaging system; a sketch of the idea follows.
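
As a rough sketch of functional recognition (illustrative only: the response curves here are simulated, whereas in practice both the curves and their behavior labels would come from the instrument and from physics-based models):

```python
# Illustrative sketch: classify spectroscopic response curves into
# local-behavior classes with a small neural network (scikit-learn).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(seed=2)
n, length = 300, 64
t = np.linspace(0, 1, length)
# Two hypothetical response types: linear vs. saturating, plus noise.
curves = np.vstack([
    np.outer(rng.uniform(0.5, 1.5, n // 2), t),                   # class 0
    np.outer(rng.uniform(0.5, 1.5, n // 2), 1 - np.exp(-5 * t)),  # class 1
]) + rng.normal(0, 0.05, (n, length))
labels = np.repeat([0, 1], n // 2)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(curves, labels)           # learn to recognize the local behavior
print(clf.score(curves, labels))  # accuracy on the toy training data
```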

Figure 4: The AFM Principle.

Artificial Intelligence in Nanoscale Simulations

Simulating the system is one of the main problems scientists must face while working at the nanoscale. The difference between nanoscopic and microscopic imaging is that real optical images cannot be obtained at the nanoscale, so images at this scale need to be interpreted, and numerical simulations are usually the best solution.

 

A number of applications and programs are used to accurately simulate systems where atomic effects are present. If these applications are correctly employed, such techniques are instrumental in getting a precise and valuable idea of what is present in the image. But in many cases they can be complicated to use, and many parameters need to be taken into account to achieve an accurate representation of the system. Here, AI can be used to improve the quality of the simulations and make them much easier to obtain and interpret.

 

 

The second application of ANNs in simulation software is to reduce the complexity associated with its configuration. In many numerical simulation methods, many parameters must be fitted to create the optimal configuration for an accurate simulation of the system. Most of these parameters are related to the system's physical geometry and must be known by the user. However, other parameters are related only to the algorithms used in the simulation, and only users with expertise in numerical methods can manage them correctly. This fact strongly reduces the usability of the software. ANNs have been proposed for inclusion in simulation packages to find an adequate configuration automatically and overcome this problem [5,6]; a toy sketch of such automatic configuration follows.
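
A toy version of the idea (purely illustrative: run_simulation and its error score are hypothetical stand-ins for a real simulation package, and a simple random search stands in for the ANN-driven configuration):

```python
# Illustrative sketch: search automatically for the algorithmic parameters
# that a non-expert user would otherwise have to tune by hand.
import random

def run_simulation(step_size, tolerance):
    """Hypothetical stand-in: return an error score for a parameter set."""
    return abs(step_size - 0.01) * 10 + abs(tolerance - 1e-6) * 1e6

best_params, best_error = None, float("inf")
for _ in range(1000):
    params = {
        "step_size": random.uniform(1e-4, 1e-1),
        "tolerance": random.uniform(1e-8, 1e-4),
    }
    error = run_simulation(**params)
    if error < best_error:
        best_params, best_error = params, error

print(best_params, best_error)  # best configuration found automatically
```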

Artificial Intelligence and Nano-Computing

Nanocomputing describes computing that uses extremely small, nanoscale devices; indeed, it is a computer whose dimensions are microscopic. Electronic nanocomputers would operate much like present-day microcomputers; the main difference is the physical scale. More and more transistors are squeezed into silicon chips with each passing year.

 

Unsurprisingly, AI is also beneficial to the future of nanocomputing, which is computing conducted through nanoscale mechanisms. There are many ways nanocomputing devices can execute a function, covering anything from physical operations to computational methods. Because a great many of these devices depend on intricate physical systems to enable intricate computational algorithms, machine learning procedures can be used to generate novel information representations for a broad range of uses.

Artificial Intelligence and High-Performance Computing

In today’s technology world, through the Internet of Things (IoT), our world is more connected than ever, with billions of IoT devices communicating with each other at the speed of the electron over your internet connectivity. These IoT devices at the edge collectively generate a tremendous amount of data, arriving from every direction and from the cloud for processing. This massive, sheer volume of data, at the level of Big Data (BD), is then transferred to and stored in cloud data centers all over the globe. The historical repository of these data needs to be compared with the fast-paced incoming data in order to prevent any duplication, particularly when these data arrive in both structured and unstructured formats, so that we can trust them for our data warehousing and data analytics; a small sketch of this deduplication step follows. We need data to increase our information and, consequently, our knowledge, in order to have the power to make decisive business decisions [7,8].
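
As a small illustration of that deduplication step (a sketch only: a real pipeline would use a distributed index rather than an in-memory set):

```python
# Illustrative sketch: fingerprint incoming IoT records and drop any
# duplicates already present in the historical repository.
import hashlib
import json

seen_fingerprints = set()  # stand-in for a historical repository index

def fingerprint(record: dict) -> str:
    """Stable hash of a record, independent of key order."""
    canonical = json.dumps(record, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def ingest(record: dict) -> bool:
    """Keep the record only if it has not been seen before."""
    fp = fingerprint(record)
    if fp in seen_fingerprints:
        return False           # duplicate: skip
    seen_fingerprints.add(fp)
    return True                # new record: keep for warehousing

print(ingest({"sensor": 7, "reading": 21.5}))  # True
print(ingest({"reading": 21.5, "sensor": 7}))  # False (same content)
```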

 

For us to be more resilient and to build an efficient Business Resilience System (BRS) around our organization, we need the integration of Artificial Intelligence (AI) and a computing power system such as High-Performance Computing (HPC), hand in hand. For insights from the collective data and the way they change second by second, we have to rely on HPC processing performance and on the AI that informs the dashboard screen via its Machine Learning (ML) and Deep Learning (DL). These combined resilience systems allow us to do business, interact with people, and live our daily lives, particularly in the tiny future world of nanoscience and nanotechnology.

 

Generation of these data insights typically occurs by processing the data through neural networks [6] to recognize patterns and categorize the information. Training these neural networks is extremely compute-intensive and requires high-performance system architectures. While CPUs are the workhorses in the data center, the highly repetitive and parallel nature of artificial intelligence workloads can be more optimally handled with a mix of computing devices.

 

Of course, building and owning a supercomputer is very costly and takes large sums of money; thus, the Total Cost of Ownership (TCO) and Return on Investment (ROI) need to be justified by the nature of your business. HPC not only takes a good sum of money but also requires highly specialized experts to operate and use it, and it is only suitable for the specialized problems we encounter in the small-scale world of nanotechnology, where we want to see the atom as Professor Richard Feynman described in his 1959 talk [1]. Given the AI, ML, and DL architecture described above, we need a holistic understanding of high-performance computing, which we can put in the following phrase.

 

High Performance Computing most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business.

 

 

The point of having a high-performance computer is that the individual nodes can work together to solve a problem larger than any one computer can easily solve. Just like people, the nodes need to be able to talk to one another to work meaningfully together; a toy sketch of this cooperation follows. With this, bear in mind “How Artificial Intelligence is Driving Innovation”.
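
A toy sketch of nodes cooperating on a single problem (illustrative only: Python's standard multiprocessing module on one machine stands in for the message-passing fabric of a real cluster):

```python
# Illustrative sketch: split one large problem across workers and combine
# the partial results, mimicking HPC nodes cooperating on a big task.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))  # each worker's share

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # gather partial results
    print(total)  # sum of squares below n, computed cooperatively
```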

Conclusion

With the progress and thriving of Artificial Intelligence (AI) and nanotechnology, both curves are on an ascending slope, and the two have narrowed toward each other to the point that we can barely recognize a separation between them. This indicates that the two complement each other and make the right partners and companions in both industries. In today's technology of artificial intelligence and nanoscience/nanotechnology, their integration seems an inevitable scenario.

 

“The debate about 'converging technologies' is part of a more comprehensive political and social discourse on nanotechnology, biotechnology, information and communications technology (ICT), brain research, artificial intelligence (AI), robotics, and the sciences that deal with these topics. Convergence is an umbrella term for predictions ranging from an increase in synergetic effects to a merging of these fields, and for demands for government funding of research and development where these fields overlap (read more in our Nanowerk Spotlight: The debate about converging technologies)” [5].

 

Whenever we speak about nanoengineering or nanotechnology, we can think of the science and knowledge of physics, chemistry, and other engineering fields that deal with small-scale things such as molecules and atoms. Given the amount of data collected from any research or application in these fields, to extract better information we have no choice but to rely on AI and, consequently, on ML and DL. We can also think of a new generation of AI, known as Super Artificial Intelligence (SAI), for which we rely on biological science and inspiration to develop some of its most effective paradigms, such as Neural Networks (NNs) or evolutionary algorithms [6].

 

Bridging the link between current nanosciences and AI can boost research in these disciplines and provide a new generation of information and communication technologies that will have an enormous impact on our society, probably providing the means to merge technology and biology. Along with the collaboration of AI and nanotechnology, both need High-Performance Computing (HPC), which will be useful to AI where data analytics and predictive analyses are concerned, through AI's subsets of Machine Learning (ML) and Deep Learning (DL). HPCs also play a big role in nano-computing, since we deal with a great many small-scale things in our applications of nanotechnology and nanoengineering. Although many efforts have been made in recent years to improve the resolution and the ability to manipulate atoms, the interpretation of the microscope signal is still a challenge. The main problem is that most tip-sample interactions are not easy to understand and depend on many parameters. These are the kinds of problems where methods from artificial intelligence can be beneficial [5]. As a concluding note, nanotechnology is the future of the small scale, and it needs artificial intelligence to enhance its data analytics, gathering information and knowledge to give us the power to be more decisive [7].

