Navlab

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

Navlab is a series of autonomous and semi-autonomous vehicles developed by teams from The Robotics Institute at the School of Computer Science, Carnegie Mellon University. Later models were produced under a new department created specifically for the research called "The Carnegie Mellon University Navigation Laboratory". Navlab 5 notably steered itself almost all the way from Pittsburgh to San Diego.

Research on computer-controlled vehicles began at Carnegie Mellon in 1984 as part of the DARPA Strategic Computing Initiative, and production of the first vehicle, Navlab 1, began in 1986. The vehicles in the Navlab series have been designed for varying purposes: "... off-road scouting; automated highways; run-off-road collision prevention; and driver assistance for maneuvering in crowded city environments. Our current work involves pedestrian detection, surround sensing, and short range sensing for vehicle control." Several types of vehicles have been developed, including "... robot cars, vans, SUVs, and buses."

ALVINN (An Autonomous Land Vehicle In a Neural Network) was developed in 1988. Detailed information is found in Dean A. Pomerleau's PhD thesis (1992). It was an early demonstration of representation learning, sensor fusion, and data augmentation. ALVINN was a three-layer fully connected feedforward network trained by backpropagation, with 1217, 29, and 46 neurons in its input, hidden, and output layers respectively. It had three types of inputs: a 30×32-unit video retina, an 8×32-unit range finder retina, and a single road-intensity feedback unit. The output layer consisted of 46 units: 45 units representing candidate steering directions and one road-intensity feedback unit.
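
A minimal sketch of a network with these layer sizes, written with PyTorch for illustration. The sigmoid activations are an assumption, since the text only says the network was fully connected and trained by backpropagation:

```python
import torch
import torch.nn as nn

# Layer sizes from the description: 1217 inputs (30x32 video retina +
# 8x32 range finder retina + 1 road-intensity feedback unit), 29 hidden
# units, 46 outputs (45 steering units + 1 feedback unit).
VIDEO, RANGE, FEEDBACK = 30 * 32, 8 * 32, 1
N_IN = VIDEO + RANGE + FEEDBACK            # = 1217
N_HIDDEN, N_OUT = 29, 46

# Sigmoid activations are assumed, not stated in the text.
alvinn_like = nn.Sequential(
    nn.Linear(N_IN, N_HIDDEN),
    nn.Sigmoid(),
    nn.Linear(N_HIDDEN, N_OUT),
    nn.Sigmoid(),
)

x = torch.rand(1, N_IN)                    # one dummy input vector
steering_and_feedback = alvinn_like(x)     # shape (1, 46)
```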

In July 1995, the team took Navlab 5 from Pittsburgh to San Diego on a proof-of-concept trip, dubbed "No Hands Across America", with the system navigating for all but 50 of the 2,850 miles and averaging over 60 mph. In 2007, Navlab 5 was added to the Class of 2008 inductees of the Robot Hall of Fame. Navlabs 6 and 7 were both built with Pontiac Bonnevilles. Navlab 8 was built with an Oldsmobile Silhouette van. Navlabs 9 and 10 were both built out of Houston transit buses.

Synthetic Minority Over-sampling Technique (SMOTE) is a method used to address imbalanced datasets in machine learning. In such datasets, the number of samples in different classes varies significantly, leading to biased model performance. For example, in a medical diagnosis dataset with 90 samples representing healthy individuals and only 10 samples representing individuals with a particular disease, traditional algorithms may struggle to accurately classify the minority class.

Data augmentation is a statistical technique which allows maximum likelihood estimation from incomplete data. Data augmentation has important applications in Bayesian analysis, and the technique is widely used in machine learning to reduce overfitting when training machine learning models, achieved by training models on several slightly modified copies of existing data.
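
As a generic illustration of training on slightly modified copies of existing data, the sketch below (NumPy, with Gaussian-noise perturbation chosen arbitrarily as the modification) expands a toy dataset with perturbed duplicates that keep their original labels:

```python
import numpy as np

def augment_copies(X, y, n_copies=3, noise_std=0.01, rng=None):
    """Return the original data plus n_copies slightly perturbed copies."""
    rng = np.random.default_rng() if rng is None else rng
    Xs, ys = [X], [y]
    for _ in range(n_copies):
        Xs.append(X + rng.normal(0.0, noise_std, size=X.shape))
        ys.append(y)                          # labels stay unchanged
    return np.concatenate(Xs), np.concatenate(ys)

X = np.random.rand(100, 8)                    # 100 samples, 8 features
y = np.random.randint(0, 2, size=100)
X_aug, y_aug = augment_copies(X, y)
print(X_aug.shape, y_aug.shape)               # (400, 8) (400,)
```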

A transformation is applied to $x_{1}$ to make it more similar to $x_{2}$, and the same transformation is then applied to $x_{3}$, which generates $x_{\text{synthetic}}$.

Synthetic data augmentation is of paramount importance for machine learning classification, particularly for biological data, which tend to be high-dimensional and scarce. The applications of robotic control and augmentation in disabled and able-bodied subjects still rely mainly on subject-specific analyses. Data scarcity is notable in signal processing problems such as Parkinson's disease electromyography (EMG) signals, which are difficult to source.

Zanini et al. noted that it is possible to use a generative adversarial network (in particular, a DCGAN) to perform style transfer in order to generate synthetic electromyographic signals corresponding to those exhibited by sufferers of Parkinson's disease. The approaches are also important in electroencephalography (brainwaves). Wang et al. explored the idea of using deep convolutional neural networks for EEG-based emotion recognition, and their results show that emotion recognition was improved when data augmentation was used.

In this way, each example is augmented to 11 examples.

They noticed that because a human driver never strays far from the path, the network would never be trained on what action to take if it ever found itself straying far from the path. To deal with this problem, they applied data augmentation: each real image is shifted to the left by 5 different amounts and to the right by 5 different amounts, and the real human steering angle is shifted accordingly.
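
A rough sketch of that shifting scheme: eleven laterally shifted copies of each frame with correspondingly shifted steering labels. The `angle_per_pixel` constant and the wrap-around `np.roll` shift are illustrative stand-ins, since the text does not say how pixel shifts were mapped to steering corrections:

```python
import numpy as np

def shift_augment(image, steering_angle, shifts=range(-5, 6),
                  angle_per_pixel=0.01):
    """Make shifted copies of a camera frame and adjust the steering label.

    Shifts of -5..+5 pixels give 11 examples, including the original.
    A real implementation would pad or crop instead of wrapping pixels.
    """
    augmented = []
    for s in shifts:
        shifted = np.roll(image, s, axis=1)            # horizontal shift
        augmented.append((shifted, steering_angle + s * angle_per_pixel))
    return augmented

frame = np.random.rand(30, 32)                          # toy video-retina frame
samples = shift_augment(frame, steering_angle=0.0)
print(len(samples))                                     # 11
```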

Because the network had been trained only on simulated images, it was unclear how well it would perform on real roads. In live experiments, ALVINN ran on Navlab 1, with a video camera and a laser rangefinder. It could drive at 0.5 m/s along a 400-meter wooded path under a variety of weather conditions: snowy, rainy, sunny, and cloudy. This was competitive with traditional computer-vision-based algorithms at the time. Later, they applied on-line imitation learning, with real data collected while a person drove the Navlab 1.

The institute has made vehicles with the designations Navlab 1 through 11. The vehicles were mainly semi-autonomous, though some were fully autonomous and required no human input. Navlab 1 was built in 1986 using a Chevrolet panel van. The van had 5 racks of computer hardware, including 3 Sun workstations, video hardware, a GPS receiver, and a Warp supercomputer; the Warp delivered 100 MFLOPS, was the size of a refrigerator, and was powered by a portable 5 kW generator.

More recently, data augmentation studies have begun to focus on the field of deep learning, more specifically on the ability of generative models to create artificial data which is then introduced during the classification model training process. In 2018, Luo et al. observed that useful EEG signal data could be generated by Conditional Wasserstein Generative Adversarial Networks (GANs), and this data was then introduced to the training set in a classical train-test learning framework. The authors found that classification performance was improved when such techniques were introduced.

Tsinganos et al. studied the approaches of magnitude warping, wavelet decomposition, and synthetic surface EMG models (generative approaches) for hand gesture recognition, finding classification performance increases of up to +16% when augmented data was introduced during training.
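
Of the techniques named above, magnitude warping is the simplest to sketch: the signal is multiplied by a smooth random curve. The knot count and spread below are illustrative defaults, not values from the cited study:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def magnitude_warp(signal, n_knots=4, sigma=0.2, rng=None):
    """Scale a 1-D signal (e.g., one sEMG channel) by a smooth random curve
    built from a cubic spline through a few randomly scaled knots."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(signal)
    knot_x = np.linspace(0, n - 1, n_knots + 2)
    knot_y = rng.normal(1.0, sigma, size=n_knots + 2)   # factors around 1.0
    warp_curve = CubicSpline(knot_x, knot_y)(np.arange(n))
    return signal * warp_curve

emg = np.sin(np.linspace(0, 20, 1000)) + 0.05 * np.random.randn(1000)
emg_warped = magnitude_warp(emg)                        # same gesture label
```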

SMOTE rebalances the dataset by generating synthetic samples for the minority class. For instance, if there are 100 samples in the majority class and 10 in the minority class, SMOTE can create synthetic samples by randomly selecting a minority class sample and its nearest neighbors, then generating new samples along the line segments joining these neighbors. This process helps increase the representation of the minority class, improving model performance.
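
A simplified sketch of that interpolation step (not the reference implementation; libraries such as imbalanced-learn ship a complete SMOTE):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_sample(X_minority, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating between a
    randomly chosen sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng() if rng is None else rng
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_minority)
    _, idx = nn.kneighbors(X_minority)        # idx[:, 0] is the point itself
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_minority))
        j = rng.choice(idx[i, 1:])            # a random true neighbour
        lam = rng.random()                    # position along the segment
        synthetic.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.array(synthetic)

X_min = np.random.rand(10, 4)                 # e.g., the 10 disease samples
X_new = smote_sample(X_min, n_new=90)         # brings the classes to 100 vs 100
```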

By inspecting the network weights, Pomerleau noticed that the feedback unit learned to measure the relative lightness of the road areas versus the non-road areas. ALVINN was trained by supervised learning on a dataset of 1,200 simulated road images paired with corresponding range finder data. These images encompassed diverse road curvatures, retinal orientations, lighting conditions, and noise levels. Generating the simulated images took 6 hours of Sun-4 CPU time.

When convolutional neural networks grew larger in the mid-1990s, there was a lack of data to use, especially considering that some part of the overall dataset should be spared for later testing. It was proposed to perturb existing data with affine transformations to create new examples with the same labels; these were complemented by so-called elastic distortions in 2003, and the technique was widely used as of the 2010s.
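
A sketch of an elastic distortion in that spirit: a random displacement field is smoothed with a Gaussian filter and used to resample the image. The `alpha` and `sigma` values are illustrative, not those of the 2003 work:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_distort(image, alpha=8.0, sigma=3.0, rng=None):
    """Resample a 2-D grayscale image along a smoothed random displacement
    field, producing a new example with the same label."""
    rng = np.random.default_rng() if rng is None else rng
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(image.shape[0]),
                         np.arange(image.shape[1]), indexing="ij")
    coords = np.array([ys + dy, xs + dx])
    return map_coordinates(image, coords, order=1, mode="reflect")

digit = np.random.rand(28, 28)                # stand-in for a handwritten digit
warped = elastic_distort(digit)
```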

This approach was shown to improve the performance of a Linear Discriminant Analysis classifier on three different datasets. Current research shows that great impact can be derived from relatively simple techniques. For example, Freer observed that introducing noise into gathered data to form additional data points improved the learning ability of several models which otherwise performed relatively poorly.
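
A toy illustration of the analogy idea; the per-feature shift-and-scale used as the "transformation" here is an assumption, since the text does not specify how the mapping from $x_{1}$ to $x_{2}$ is estimated:

```python
import numpy as np

def analogy_trial(x1, x2, x3):
    """Form x_synthetic so that it relates to x3 roughly as x2 relates to x1:
    estimate a shift-and-scale that maps x1 toward x2 and apply it to x3."""
    eps = 1e-12
    scale = (np.std(x2) + eps) / (np.std(x1) + eps)    # match spread
    shift = np.mean(x2) - np.mean(x1) * scale          # match mean
    return x3 * scale + shift

x1, x2, x3 = (np.random.randn(64) for _ in range(3))   # three real trials
x_synthetic = analogy_trial(x1, x2, x3)
```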

Geometric transformations alter the spatial properties of images to simulate different perspectives, orientations, and scales; common techniques include flipping, rotation, cropping, and rescaling. Color space transformations modify the color properties of images, addressing variations in lighting, color saturation, and contrast, through adjustments such as brightness, contrast, saturation, and hue shifts. Injecting noise into images simulates real-world imperfections, teaching models to ignore irrelevant variations; techniques include adding Gaussian noise and applying blur. Residual or block bootstrap can be used for time series augmentation.
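
A hypothetical pipeline combining the three families of image techniques above, assuming the torchvision transforms API; the parameter values are arbitrary examples:

```python
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                    # geometric
    transforms.RandomRotation(degrees=10),                     # geometric
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),  # geometric
    transforms.ColorJitter(brightness=0.2, contrast=0.2,       # color space
                           saturation=0.2, hue=0.05),
    transforms.ToTensor(),
    transforms.Lambda(lambda t: t + 0.01 * torch.randn_like(t)),  # noise
])
# Passing `augment` as a dataset's transform means every epoch sees a
# slightly different version of each training image.
```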

Data augmentation can enhance CNN performance and acts as a countermeasure against CNN profiling attacks. Data augmentation has become fundamental in image classification, enriching training dataset diversity to improve model generalization and performance. The evolution of this practice has introduced a broad spectrum of techniques, including geometric transformations, color space adjustments, and noise injection.

A common approach is to generate synthetic signals by re-arranging components of real data. Lotte proposed a method of "Artificial Trial Generation Based on Analogy" where three data examples $x_{1}, x_{2}, x_{3}$ provide the examples and an artificial $x_{\text{synthetic}}$ is formed which is to $x_{3}$ what $x_{2}$ is to $x_{1}$.

The prediction of mechanical signals based on data augmentation brings a new generation of technological innovations, such as new energy dispatch, the 5G communication field, and robotics control engineering. In 2022, Yang et al. integrated constraints, optimization, and control into a deep network framework based on data augmentation and data pruning with spatio-temporal data correlation, improving the interpretability, safety, and controllability of deep learning.

When driving over rough terrain, its top speed was limited to 6 mph (9.7 km/h); when driven on-road, Navlab 2 could reach speeds as high as 70 mph (110 km/h). Navlab 1 and 2 were semi-autonomous and used "... steering wheel and drive shaft encoders and an expensive inertial navigation system for position estimation." Navlab 5 used a 1990 Pontiac Trans Sport minivan.

The vehicle suffered from software limitations and was not fully functional until the late 1980s, when it achieved its top speed of 20 mph (32 km/h). Navlab 2 was built in 1990 using a US Army HMMWV. Computing power was uprated for this new vehicle, with three Sparc 10 computers "for high level data processing" and two 68000-based computers "used for low level control". The Hummer was capable of driving both off-road and on-road.

The network was trained for 40 epochs by backpropagation on the Warp supercomputer, taking 45 minutes. The desired output for each training example was a Gaussian distribution of activation across the steering output units, centered on the unit representing the correct steering angle. At the end of training, the network achieved 90% accuracy in predicting the correct steering angle to within two units of the true value on unseen simulated road images.
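
A small sketch of how such a soft target could be built; the 45 steering units follow the architecture described earlier, and the Gaussian width is an assumed value since the text does not state the spread that was used:

```python
import numpy as np

def gaussian_steering_target(correct_unit, n_units=45, width=2.0):
    """Desired output for one example: a Gaussian bump of activation over the
    steering units, centred on the unit for the correct steering angle."""
    units = np.arange(n_units)
    target = np.exp(-0.5 * ((units - correct_unit) / width) ** 2)
    return target / target.max()              # peak activation of 1.0

target = gaussian_steering_target(correct_unit=22)   # roughly straight ahead
print(np.argmax(target), target[20:25].round(2))
```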
