Deep Learning in a Virtual Environment for Self-Driving Cars
By [Name]
Institution
Instructor
Course
Date
Introduction
Owing to advances in technology, software has been developed to emulate the behaviour of human drivers. Self-driving cars have become popular in recent years as many firms develop the software and hardware technologies needed to achieve fully autonomous driving without human intervention (Tian, Pei, Jana and Ray 2017, p.1). Deep learning is a branch of artificial intelligence that uses neural networks to enable machines to learn from data, and deep learning with neural networks has been applied successfully to a range of control tasks (Kato et al 2017, p.2).
Deep learning networks
Bechtel, McEllhiney and Yun (2017) argued that deep neural networks are an essential workload for self-driving cars. For instance, the Tesla Model S used a dedicated chip, the Mobileye EyeQ, which ran a deep neural network (DNN) for real-time, vision-based obstacle detection and avoidance. DNNs can therefore serve as controllers for self-driving cars, and it is anticipated that more DNN-based artificial intelligence will be applied in future self-driving vehicles (Bechtel, McEllhiney and Yun 2017, p.1).
Research by Tian, Pei, Jana and Ray (2017) suggested that a deep-learning system learns from road images paired with the steering angles produced by a human driver. The main use of deep learning networks in the automotive domain is complex computer vision and perception (Tian, Pei, Jana and Ray 2017, p.3). Visual tasks such as blind-spot monitoring, road-sign recognition, pedestrian detection, and lane detection are handled efficiently with deep learning. Kato et al (2017) argued that the major difference between deep learning and conventional machine learning lies in the degree to which the network can learn on its own. A conventional machine learning model extracts features from the input (the training data) and makes predictions from a single layer, or only a few layers, of nodes. A deep neural network (DNN), by contrast, comprises many hidden layers that learn new features automatically, surpassing what hand-coded features can capture (Kato et al 2017, p.3). For this reason, deep learning is more powerful and robust for complicated computing tasks such as object recognition; a minimal Keras sketch of this contrast follows Figure 1.
Figure 1: Deep learning used in a car for pedestrian detection (Kato et al 2017)

Explanation: Figure 1 illustrates how deep learning in self-driving cars can be used to identify pedestrians on the road.
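To make the contrast concrete, the minimal Keras sketch below compares a shallow model that predicts from a single layer of nodes with a DNN that stacks several hidden layers. The input size, layer widths and class count are illustrative assumptions, not values taken from the cited studies.

from tensorflow import keras
from tensorflow.keras import layers

NUM_FEATURES = 128   # assumed size of a flattened image descriptor (illustrative)
NUM_CLASSES = 2      # e.g. "pedestrian" vs "no pedestrian" (illustrative)

# Shallow model: predictions come from a single layer of nodes,
# so useful features must largely be engineered by hand beforehand.
shallow_model = keras.Sequential([
    keras.Input(shape=(NUM_FEATURES,)),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Deep model: several hidden layers learn intermediate features
# automatically from the raw input.
deep_model = keras.Sequential([
    keras.Input(shape=(NUM_FEATURES,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

deep_model.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
deep_model.summary()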
A study by Shalev-Shwartz, Shammah and Shashua (2016) highlighted a noteworthy enhancement of deep learning: the convolutional neural network (CNN). In this approach the input is the entire image, so feature extraction is embedded in the network itself; a minimal CNN sketch follows Figure 2. In some situations, such as steering the car along fixed routes in a town mapped at high resolution, less complex learning algorithms are adequate. In more complicated scenarios, however, such as numerous changing routes or unknown destinations, a deep learning network is the more appropriate alternative (Shalev-Shwartz, Shammah and Shashua 2016, p.5).
Figure 2: Deep learning networks at a pedestrian crossing (Shalev-Shwartz, Shammah and Shashua 2016)

Explanation: Figure 2 demonstrates the use of deep learning networks in autonomous cars at a pedestrian crossing, where the network senses the road users.
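As a rough illustration of this point, the following minimal Keras CNN takes the whole camera frame as input, and the convolutional layers perform the feature extraction that would otherwise have to be hand-engineered. The image size, filter counts and class labels are assumptions made for the example only.

from tensorflow import keras
from tensorflow.keras import layers

# Assumed camera frame size and number of output classes (illustrative only)
IMG_HEIGHT, IMG_WIDTH, CHANNELS = 66, 200, 3
NUM_CLASSES = 3   # e.g. pedestrian / vehicle / clear road

cnn = keras.Sequential([
    keras.Input(shape=(IMG_HEIGHT, IMG_WIDTH, CHANNELS)),
    # Convolutional layers: feature extraction is embedded in the network
    layers.Conv2D(24, kernel_size=5, strides=2, activation="relu"),
    layers.Conv2D(36, kernel_size=5, strides=2, activation="relu"),
    layers.Conv2D(48, kernel_size=3, strides=1, activation="relu"),
    layers.Flatten(),
    # Fully connected layers turn the extracted features into a decision
    layers.Dense(100, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
cnn.summary()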
Bojarski et al (2017) examined the ways in which deep learning can be applied in autonomous cars. The researchers pointed out two main approaches, each with its benefits and shortcomings. The first, semantic abstraction, breaks the self-driving problem down into multiple components, with each algorithm dedicated to a single portion of the task (Bojarski et al 2017, p.1). For instance, one component could concentrate on pedestrian detection while another recognizes the lane markings, and a third identifies objects outside the lane. These components are then integrated into a principal network that makes the driving decisions. Alternatively, a single network can be created that detects and categorizes the different classes directly, or even performs semantic segmentation (Shalev-Shwartz, Shammah and Shashua 2016, p.5).
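The following minimal Python sketch illustrates the modular idea. The component outputs and the simple rule that fuses them are hypothetical placeholders rather than the authors' actual system; they only show how specialised algorithms for pedestrians, lane markings and off-lane objects could feed a principal decision step.

from dataclasses import dataclass

# Hypothetical outputs of three specialised components (semantic abstraction):
# each algorithm is dedicated to a single portion of the driving task.
@dataclass
class Perception:
    pedestrian_ahead: bool      # from a pedestrian-detection model
    lane_offset_m: float        # from a lane-marking recognition model
    obstacle_off_lane: bool     # from an off-lane object detector

def principal_decision(p: Perception) -> str:
    """Fuse the component outputs into a single driving decision."""
    if p.pedestrian_ahead:
        return "brake"
    if abs(p.lane_offset_m) > 0.5:      # drifting out of the lane
        return "steer_back_to_lane"
    if p.obstacle_off_lane:
        return "monitor"                # object is outside the lane; keep watching
    return "keep_driving"

# Example: the pedestrian detector fires, so the fused decision is to brake.
print(principal_decision(Perception(True, 0.1, False)))   # -> "brake"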
Advantages and disadvantages
A study by Tian, Pei, Jana and Ray (2017) highlighted the advantages of semantic abstraction: it offers minimal tolerance for errors, it can isolate errors more quickly, and it handles unpredictable circumstances better. Its shortcoming is that the system needs complicated programming and an enormous amount of preparatory work (Tian, Pei, Jana and Ray 2017, p.6).
The second, "disruptive" learning network involves an end-to-end approach. This network enables the car to learn to drive itself without hand-built components, although it relies on a large amount of data prepared by humans (a minimal sketch of such a network follows Figure 3). According to Bechtel, McEllhiney and Yun (2017), this strategy has several shortcomings: it needs a large amount of training data, and the network must be tuned and trained carefully. Its advantage, the scholars noted, is that this kind of learning network is quite promising for future intelligent cars (Bechtel, McEllhiney and Yun 2017, p.6).
Figure 3: Deep learning in intelligent cars (Bechtel, McEllhiney and Yun 2017)

Explanation: Figure 3 displays how a deep learning system uses computing software in the car, linked with deep neural networks, to facilitate the functions of the car.
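The end-to-end idea described by Bojarski et al (2016) can be sketched as a single network that maps a camera frame directly to a steering command. The layer sizes below are illustrative assumptions, not the authors' exact architecture.

from tensorflow import keras
from tensorflow.keras import layers

# End-to-end sketch: raw camera frame in, steering angle out.
# Image size and layer widths are assumptions made for illustration.
inputs = keras.Input(shape=(66, 200, 3), name="camera_frame")
x = layers.Rescaling(1.0 / 255)(inputs)          # normalise pixel values
x = layers.Conv2D(24, 5, strides=2, activation="relu")(x)
x = layers.Conv2D(36, 5, strides=2, activation="relu")(x)
x = layers.Conv2D(48, 5, strides=2, activation="relu")(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.Flatten()(x)
x = layers.Dense(100, activation="relu")(x)
x = layers.Dense(50, activation="relu")(x)
steering = layers.Dense(1, name="steering_angle")(x)   # regression output

end_to_end = keras.Model(inputs, steering)
end_to_end.compile(optimizer="adam", loss="mse")

# Training would pair recorded frames with the human driver's steering angles:
# end_to_end.fit(frames, steering_angles, epochs=10)
end_to_end.summary()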
Challenges and shortcomings of deep learning networks
Various studies have attempted to determine the limitations of deep learning networks when used in self-driving cars. Firstly, Kato et al (2017) argued that because deep learning demands such a huge amount of processing power, a powerful "brain" is required to handle the large data volumes and computing requirements. Presently, the most appropriate technology is the graphics processing unit (GPU), because it is designed for heavy image-computing workloads. Companies such as Intel and NVIDIA plan to deliver deep learning hardware with superior processing capacity for the smart-car market. Nevertheless, the challenge remains to manufacture a less expensive GPU that operates within the energy-consumption and heat-management constraints required for a market-ready car (Kato et al 2017, p.3).
Research by Bojarski et al (2016) contended that an end-to-end learning network demands a large amount of training data so that it can make predictions in as many driving situations as possible and meet minimum safety requirements. The researchers noted that more than a billion kilometres of training data covering convincing road circumstances would be needed to draw conclusions about the safety of the car, and that the data must also be sufficiently diverse to be useful. The lack of adequate training data therefore poses a challenge to the use of deep learning networks (Bojarski et al 2016, p.6).
Deep neural networks (DNNs) for autonomous cars also face the challenge of safety. Based on research findings by Schmidhuber (2015), deep neural networks are limited because they are quite unstable under adversarial perturbation. The study found that negligible changes to the camera images, such as cropping, resizing, and modified lighting conditions, may lead the system to misclassify the image. Similarly, the safety of the cars is not guaranteed because security verification and assurance methods for deep learning networks remain inadequately studied. The researchers further noted that there is no mechanism for regulating the safety element, owing to the fast pace of emerging technology (Schmidhuber 2015, p.7). A notable safety disaster occurred in the past when a Tesla self-driving car was involved in an accident: the vehicle's sensors were affected by sunlight and the system was unable to recognize the truck approaching from the right, contributing to the crash. For this reason, more studies should be conducted before conclusions can be drawn about the safety levels of self-driving cars (Bojarski et al 2016, p.4).
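A simple way to probe this instability, in the spirit of the image transformations the studies mention, is to apply small crops, resizes and brightness changes to a frame and check whether a trained classifier's prediction changes. The model and image passed in below are placeholders; only the testing pattern is the point.

import numpy as np
import tensorflow as tf

def perturbations(image):
    """Yield slightly modified versions of a (H, W, 3) float image in [0, 1]."""
    h, w = image.shape[0], image.shape[1]
    # Crop a few pixels from each border, then resize back
    cropped = tf.image.resize(image[4:h - 4, 4:w - 4], (h, w))
    yield "crop", cropped
    # Downscale and upscale (resizing artefacts)
    small = tf.image.resize(image, (h // 2, w // 2))
    yield "resize", tf.image.resize(small, (h, w))
    # Brighten and darken (changed lighting conditions)
    yield "brighter", tf.clip_by_value(image * 1.3, 0.0, 1.0)
    yield "darker", tf.clip_by_value(image * 0.7, 0.0, 1.0)

def check_stability(model, image):
    """Report whether small perturbations change the model's predicted class."""
    base = int(np.argmax(model.predict(image[np.newaxis], verbose=0)))
    for name, variant in perturbations(image):
        pred = int(np.argmax(model.predict(np.asarray(variant)[np.newaxis], verbose=0)))
        if pred != base:
            print(f"unstable under {name}: class {base} -> {pred}")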
Keras and TensorFlow backend
Some of the most common deep learning frameworks include CNTK, Caffe, Theano, and TensorFlow. Keras is a high-level deep learning library that can run on top of CNTK, Theano, or TensorFlow. Studies indicate that these common frameworks are built with a great deal of functionality (Tian, Pei, Jana and Ray 2017, p.5).
Seminal work by Shalev-Shwartz, Shammah and Shashua (2016) illustrated that Keras offers a simple, modular API for building and training neural networks without exposing most of the complexity under the hood, which makes for an easier deep learning experience. Using Keras requires TensorFlow or Theano as a backend, together with other libraries for visualizing and handling data. Keras provides layers, which serve as the building blocks of a neural network: they process the input data and generate different outputs depending on the kind of layer, and those outputs are then consumed by the nodes of the layers connected to them (Shalev-Shwartz, Shammah and Shashua 2016, p.7). Key layer types include dense layers, activation layers, and dropout layers. Kato et al (2017) noted that a dense layer connects input and output nodes, an activation layer applies activation functions such as tanh and ReLU, and a dropout layer is used for regularization during training (Kato et al 2017, p.5).
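A minimal sketch of how these three layer types fit together in Keras is shown below; the layer sizes and the dropout rate are illustrative assumptions.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64,)),             # assumed input feature size
    layers.Dense(128),                    # dense layer: connects input and output nodes
    layers.Activation("relu"),            # activation layer: applies ReLU
    layers.Dropout(0.5),                  # dropout layer: regularization during training
    layers.Dense(32),
    layers.Activation("tanh"),            # tanh activation, as mentioned above
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()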
According to Bechtel, McEllhiney, and Yun (2017), TensorFlow provides extensive documentation for learning and installation, intended to help beginners understand the conceptual elements of neural networks and learn the TensorFlow system. It can also compute limited subgraphs of a model, in a process referred to as model parallelization, which allows for distributed training. The researchers noted that Keras supports TensorFlow, meaning that Keras offers both Theano and TensorFlow backends. However, TensorFlow is quite slow compared with Torch and Theano (Bechtel, McEllhiney and Yun 2017, p.7).
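As a rough illustration of associating different parts of a Keras model with different devices, the sketch below uses TensorFlow's tf.device scopes. The device names and the point at which the model is split are assumptions; this is only a simple form of splitting computation across devices, not the full subgraph-scheduling mechanism described above.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(128,))

# Pin one part of the graph to the CPU and another to a GPU, if one is present.
with tf.device("/CPU:0"):
    hidden = layers.Dense(256, activation="relu")(inputs)

with tf.device("/GPU:0" if tf.config.list_physical_devices("GPU") else "/CPU:0"):
    outputs = layers.Dense(10, activation="softmax")(hidden)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()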
Bojarski et al (2017) highlighted that Theano is useful because it computes the gradients needed for back-propagation by deriving an analytical expression, which helps eliminate the accumulation of error in the successive computations that apply the chain rule. Theano competes with frameworks such as Torch on performance speed, and computing convolutions with this framework yields a performance improvement over computing them in the conventional way (Bojarski et al 2017, p.9).
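A tiny Theano sketch of this idea follows; the toy linear model and learning rate are illustrative assumptions. Theano builds a symbolic expression for the gradient rather than estimating it numerically.

import theano
import theano.tensor as T

# Symbolic variables for a single linear unit: prediction = w * x + b
x = T.dscalar('x')
target = T.dscalar('target')
w = theano.shared(0.5, name='w')
b = theano.shared(0.0, name='b')

prediction = w * x + b
loss = (prediction - target) ** 2

# Theano derives the gradient expressions analytically via the chain rule
grad_w, grad_b = T.grad(loss, [w, b])

# Compile a training step that applies one gradient-descent update
train = theano.function(
    inputs=[x, target],
    outputs=loss,
    updates=[(w, w - 0.01 * grad_w), (b, b - 0.01 * grad_b)],
)

print(train(2.0, 1.0))   # loss for one example; w and b are updated in place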
Conclusion
Deep learning technology is a branch of artificial intelligence (AI) that enables a car to detect surrounding objects. DNNs are beneficial because they facilitate the seamless functioning of the self-driving car: the technology provides a computing system that lets the car sense the external environment, such as lanes and oncoming vehicles. This form of intelligence is very encouraging, since it could become the principal algorithm that allows the car to move independently. Deep learning networks use frameworks such as Keras with a TensorFlow backend. However, the approach has some serious shortcomings that require improvement. Its safety levels are still unsatisfactory, so further innovation is needed, and deep learning remains limited by the amount of data required for training.
References
Bechtel, M.G., McEllhiney, E. and Yun, H., 2017. DeepPicar: A low-cost deep neural network-based autonomous car. arXiv preprint arXiv:1712.08644.
Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L.D., Monfort, M., Muller, U., Zhang, J. and Zhang, X., 2016. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316.
Bojarski, M., Yeres, P., Choromanska, A., Choromanski, K., Firner, B., Jackel, L. and Muller, U., 2017. Explaining how a deep neural network trained with end-to-end learning steers a car. arXiv preprint arXiv:1704.07911.
Kato, N., Fadlullah, Z.M., Mao, B., Tang, F., Akashi, O., Inoue, T. and Mizutani, K., 2017. The deep learning vision for heterogeneous network traffic control: Proposal, challenges, and future perspective. IEEE Wireless Communications, 24(3), pp.146-153.
Schmidhuber, J., 2015. Deep learning in neural networks: An overview. Neural Networks, 61, pp.85-117.
Shalev-Shwartz, S., Shammah, S. and Shashua, A., 2016. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295.
Tian, Y., Pei, K., Jana, S. and Ray, B., 2017. DeepTest: Automated testing of deep-neural-network-driven autonomous cars. arXiv preprint arXiv:1708.08559.