
LiteOS

Article snapshot taken from Wikipedia, under the Creative Commons Attribution-ShareAlike license.

A real-time operating system (RTOS) is an operating system (OS) for real-time computing applications that processes data and events that have critically defined time constraints. An RTOS is distinct from a time-sharing operating system, such as Unix, which manages the sharing of system resources with a scheduler, data buffers, or fixed task prioritization in multitasking or multiprogramming environments. All operations must verifiably complete within given time and resource constraints or else fail safe. Real-time operating systems are event-driven and preemptive, meaning the OS can monitor the relevant priority of competing tasks and make changes to the task priority. Event-driven systems switch between tasks based on their priorities, while time-sharing systems switch the task based on clock interrupts.


Huawei LiteOS is a discontinued lightweight real-time operating system (RTOS) developed by Huawei. It is an open-source, POSIX-compliant operating system for Internet of Things (IoT) devices, released under a three-clause BSD license. The project supports microcontrollers of several architectures, such as ARM (M0/3/4/7, A7/17/53, ARM9/11), x86, and RISC-V. Huawei LiteOS

In a deadlock, two or more tasks lock mutexes without timeouts and then wait forever for the other task's mutex, creating a cyclic dependency. The simplest deadlock scenario occurs when two tasks alternately lock two mutexes, but in the opposite order. Deadlock is prevented by careful design. The other approach to resource sharing is for tasks to send messages in an organized message-passing scheme. In this paradigm,
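A minimal sketch of the deadlock-prone pattern described above, shown with POSIX mutexes for illustration (the task bodies and mutex names are hypothetical):

```c
#include <pthread.h>

/* Two shared resources, each protected by its own mutex. */
static pthread_mutex_t mutex_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mutex_b = PTHREAD_MUTEX_INITIALIZER;

/* Task 1 locks A, then B. */
static void *task1(void *arg)
{
    pthread_mutex_lock(&mutex_a);
    pthread_mutex_lock(&mutex_b);   /* blocks forever if task2 already holds B */
    /* ... use both resources ... */
    pthread_mutex_unlock(&mutex_b);
    pthread_mutex_unlock(&mutex_a);
    return NULL;
}

/* Task 2 locks B, then A: the opposite order creates the cyclic dependency. */
static void *task2(void *arg)
{
    pthread_mutex_lock(&mutex_b);
    pthread_mutex_lock(&mutex_a);   /* blocks forever if task1 already holds A */
    /* ... use both resources ... */
    pthread_mutex_unlock(&mutex_a);
    pthread_mutex_unlock(&mutex_b);
    return NULL;
}
```

Imposing a single global lock order (every task acquires mutex_a before mutex_b) is one of the careful-design rules that removes the cycle.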

A "soft" real-time operating system (soft RTOS); a late answer is a wrong answer in a hard RTOS, while a late answer is acceptable in a soft RTOS. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline deterministically it is a hard real-time OS. An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables

A few tasks but occasionally contains more, then the list should be sorted by priority, so that finding the highest-priority task to run does not require traversing the list. Instead, inserting a task requires walking the list. During this search, preemption should not be inhibited. Long critical sections should be divided into smaller pieces. If an interrupt occurs that makes a high-priority task ready during
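A sketch of inserting a task control block into such a priority-sorted ready list (the TCB layout, the priority convention, and the function names are assumptions, not taken from any particular RTOS):

```c
#include <stddef.h>

/* Hypothetical task control block kept on a doubly linked ready list,
   sorted so that the highest-priority task is always at the head. */
typedef struct tcb {
    int         priority;    /* larger value = higher priority (assumption) */
    struct tcb *prev;
    struct tcb *next;
} tcb_t;

typedef struct {
    tcb_t *head;             /* highest-priority ready task, dequeued in O(1) */
    tcb_t *tail;
} ready_list_t;

/* Keep the list sorted by priority; picking the next task to run is then
   just "take the head" and needs no traversal. */
static void ready_list_insert(ready_list_t *rl, tcb_t *t)
{
    tcb_t *cur = rl->head;

    while (cur != NULL && cur->priority >= t->priority)
        cur = cur->next;                 /* walk past higher or equal priorities */

    if (cur == NULL) {                   /* insert at the tail (or into an empty list) */
        t->prev = rl->tail;
        t->next = NULL;
        if (rl->tail) rl->tail->next = t; else rl->head = t;
        rl->tail = t;
    } else {                             /* insert just before 'cur' */
        t->prev = cur->prev;
        t->next = cur;
        if (cur->prev) cur->prev->next = t; else rl->head = t;
        cur->prev = t;
    }
}
```

In a real scheduler only the final pointer updates would sit inside a short critical section; the walk itself can run with preemption enabled, as the text notes.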

A flag or sending a message. A scheduler often provides the ability to unblock a task from interrupt handler context. An OS maintains catalogues of the objects it manages, such as threads, mutexes, and memory. Updates to these catalogues must be strictly controlled. For this reason, it can be problematic when an interrupt handler calls an OS function while the application is also in the act of doing so. The OS function called from an interrupt handler could find

A minimum, interrupt handlers are typically kept as short as possible. The interrupt handler defers all interaction with the hardware if possible; typically all that is necessary is to acknowledge or disable the interrupt (so that it won't occur again when the interrupt handler returns) and notify a task that work needs to be done. This can be done by unblocking a driver task through releasing a semaphore, setting
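A sketch of that hand-off using a FreeRTOS-style semaphore API for illustration (the UART interrupt, the register macro, and the driver task are assumptions, not part of the article):

```c
#include "FreeRTOS.h"
#include "semphr.h"

/* Binary semaphore used to wake the driver task; assumed to be created at
   startup with xSemaphoreCreateBinary(). */
static SemaphoreHandle_t uart_rx_sem;

/* Interrupt handler: acknowledge the hardware, then hand the work to a task. */
void UART_IRQHandler(void)
{
    BaseType_t woken = pdFALSE;

    UART_CLEAR_INTERRUPT();                  /* hypothetical register macro */
    xSemaphoreGiveFromISR(uart_rx_sem, &woken);
    portYIELD_FROM_ISR(woken);               /* switch immediately if a
                                                higher-priority task was unblocked */
}

/* Driver task: the slow work runs here, outside interrupt context. */
void uart_driver_task(void *arg)
{
    for (;;) {
        xSemaphoreTake(uart_rx_sem, portMAX_DELAY);  /* block until the ISR signals */
        /* ... read the received bytes and process them ... */
    }
}
```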

A mutex, but the lower-priority task is not given CPU time to finish its work. A typical solution is to have the task that owns a mutex 'inherit' the priority of the highest waiting task. But this simple approach gets more complex when there are multiple levels of waiting: task A waits for a mutex locked by task B, which waits for a mutex locked by task C. Handling multiple levels of inheritance causes other code to run in high-priority context and thus can cause starvation of medium-priority threads. In
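A minimal sketch of single-level priority inheritance, with hypothetical task and mutex structures (a full implementation would also walk the chain of owners to handle the multi-level case described above):

```c
/* Hypothetical structures used only for illustration. */
typedef struct task {
    int base_priority;       /* priority assigned by the designer */
    int active_priority;     /* possibly boosted while holding a mutex */
} task_t;

typedef struct {
    task_t *owner;           /* NULL while the mutex is unlocked */
} pi_mutex_t;

/* Called when 'waiter' blocks on a mutex already owned by another task:
   the owner temporarily inherits the waiter's higher priority. */
static void inherit_priority(pi_mutex_t *m, task_t *waiter)
{
    if (m->owner != NULL && waiter->active_priority > m->owner->active_priority)
        m->owner->active_priority = waiter->active_priority;
}

/* Called when the owner releases the mutex: drop back to the base priority
   (in general, to the highest priority among any mutexes it still holds). */
static void restore_priority(task_t *owner)
{
    owner->active_priority = owner->base_priority;
}
```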

A node either after or before a given node: We also need a function to insert a node at the beginning of a possibly empty list: A symmetric function inserts at the end: Removal of a node is easier than insertion, but requires special handling if the node to be removed is the firstNode or lastNode: One subtle consequence of the above procedure is that deleting the last node of a list sets both firstNode and lastNode to null, and so it handles removing
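C sketches of the insertion and removal operations referred to above, assuming a list object holding firstNode and lastNode pointers and nodes with prev/next links (the names mirror those used in the surrounding text):

```c
#include <stddef.h>

typedef struct Node {
    struct Node *prev;
    struct Node *next;
    int          data;
} Node;

typedef struct {
    Node *firstNode;   /* NULL when the list is empty */
    Node *lastNode;
} List;

/* Insert newNode after node. */
void insertAfter(List *list, Node *node, Node *newNode) {
    newNode->prev = node;
    newNode->next = node->next;
    if (node->next == NULL)
        list->lastNode = newNode;          /* node was the last node */
    else
        node->next->prev = newNode;
    node->next = newNode;
}

/* The symmetric operation: insert newNode before node. */
void insertBefore(List *list, Node *node, Node *newNode) {
    newNode->next = node;
    newNode->prev = node->prev;
    if (node->prev == NULL)
        list->firstNode = newNode;         /* node was the first node */
    else
        node->prev->next = newNode;
    node->prev = newNode;
}

/* Insert at the beginning of a possibly empty list. */
void insertBeginning(List *list, Node *newNode) {
    if (list->firstNode == NULL) {
        newNode->prev = newNode->next = NULL;
        list->firstNode = list->lastNode = newNode;
    } else {
        insertBefore(list, list->firstNode, newNode);
    }
}

/* The symmetric function inserts at the end. */
void insertEnd(List *list, Node *newNode) {
    if (list->lastNode == NULL)
        insertBeginning(list, newNode);
    else
        insertAfter(list, list->lastNode, newNode);
}

/* Remove node from the list; the firstNode/lastNode cases need special handling.
   (Named removeNode here to avoid clashing with remove() from <stdio.h>;
   the caller frees or reuses the node.) */
void removeNode(List *list, Node *node) {
    if (node->prev == NULL)
        list->firstNode = node->next;
    else
        node->prev->next = node->next;
    if (node->next == NULL)
        list->lastNode = node->prev;
    else
        node->next->prev = node->prev;
}
```

Deleting the only node of a one-element list takes both branches that assign to firstNode and lastNode, which is how both end up null, as the text goes on to note.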

A possibly empty list requires a special function: To insert at the beginning we simply "insertAfter(list.lastNode, node)". Finally, removing a node must deal with the case where the list empties: As in doubly linked lists, "removeAfter" and "removeBefore" can be implemented with "remove(list, node.prev)" and "remove(list, node.next)". An asymmetric doubly linked list is somewhere between
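A C sketch of those two special cases for a circular doubly linked list whose list object stores only lastNode (field names follow the surrounding text; the Node layout is assumed):

```c
#include <stddef.h>

typedef struct Node {
    struct Node *prev;
    struct Node *next;
    int          data;
} Node;

typedef struct {
    Node *lastNode;   /* NULL when empty; lastNode->next is the first node */
} CircularList;

/* Insert newNode after a node that is already on the list. */
void insertAfter(Node *node, Node *newNode) {
    newNode->prev = node;
    newNode->next = node->next;
    node->next->prev = newNode;
    node->next = newNode;
}

/* Insert into a possibly empty list at the end. */
void insertEnd(CircularList *list, Node *node) {
    if (list->lastNode == NULL) {
        node->prev = node;               /* a lone node links to itself */
        node->next = node;
    } else {
        insertAfter(list->lastNode, node);
    }
    list->lastNode = node;
}

/* Insert at the beginning: insertAfter(lastNode), with lastNode left unchanged. */
void insertBeginning(CircularList *list, Node *node) {
    if (list->lastNode == NULL)
        insertEnd(list, node);           /* empty list: same as insertEnd */
    else
        insertAfter(list->lastNode, node);
}

/* Removal must deal with the case where the list empties. */
void removeNode(CircularList *list, Node *node) {
    if (node->next == node) {            /* only element: the list becomes empty */
        list->lastNode = NULL;
    } else {
        node->next->prev = node->prev;
        node->prev->next = node->next;
        if (list->lastNode == node)
            list->lastNode = node->prev;
    }
}
```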

A search of the list for a node with a specific data value. Any node of a doubly linked list, once obtained, can be used to begin a new traversal of the list, in either direction (towards beginning or end), from the given node. The link fields of a doubly linked list node are often called next and previous or forward and backward. The references stored in the link fields are usually implemented as pointers, but (as in any linked data structure) they may also be address offsets or indices into an array where
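A brief sketch of the index-based variant, in which the links are indices into an array of nodes rather than pointers (the pool size and the use of -1 as the 'null' index are assumptions):

```c
#define POOL_SIZE 64
#define NIL       (-1)     /* index value used in place of a null pointer */

/* Nodes live in a fixed array; prev and next store indices into that array. */
struct inode {
    int prev;
    int next;
    int data;
};

static struct inode pool[POOL_SIZE];

/* Forward traversal starting from the node at index 'first'. */
static void traverse(int first)
{
    for (int i = first; i != NIL; i = pool[i].next) {
        /* ... visit pool[i].data ... */
    }
}
```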

A task is working on a low-priority message and ignores a higher-priority message (or a message originating indirectly from a high-priority task) in its incoming message queue. Protocol deadlocks can occur when two or more tasks wait for each other to send response messages. Since an interrupt handler blocks the highest-priority task from running, and since real-time operating systems are designed to keep thread latency to



A wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread-switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time. An RTOS

Is an operating system in which the time taken to process an input stimulus is less than the time lapsed until the next input stimulus of the same type. The most common designs are event-driven designs, which switch tasks only when an event of higher priority needs servicing, and time-sharing designs, which switch tasks on a regular clocked interrupt as well as on events (round robin). Time-sharing designs switch tasks more often than strictly needed, but give smoother multitasking, giving the illusion that a process or user has sole use of a machine. Early CPU designs needed many cycles to switch tasks, during which

Is better to use mechanisms also available on general-purpose operating systems, such as a mutex and OS-supervised interprocess messaging. Such mechanisms involve system calls, and usually invoke the OS's dispatcher code on exit, so they typically take hundreds of CPU instructions to execute, while masking interrupts may take as few as one instruction on some processors. A (non-recursive) mutex

Is either locked or unlocked. When a task has locked the mutex, all other tasks must wait for the mutex to be unlocked by its owner, the original thread. A task may set a timeout on its wait for a mutex. There are several well-known problems with mutex-based designs, such as priority inversion and deadlocks. In priority inversion a high-priority task waits because a low-priority task has
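A sketch of waiting for a mutex with a timeout, using a FreeRTOS-style API for illustration (the 10 ms bound and the protected resource are assumptions):

```c
#include "FreeRTOS.h"
#include "semphr.h"

/* Assumed to be created once at startup with xSemaphoreCreateMutex(). */
static SemaphoreHandle_t res_mutex;

void update_shared_resource(void)
{
    /* Wait at most 10 ms for the mutex instead of blocking indefinitely. */
    if (xSemaphoreTake(res_mutex, pdMS_TO_TICKS(10)) == pdTRUE) {
        /* ... exclusive access to the shared resource ... */
        xSemaphoreGive(res_mutex);
    } else {
        /* Timed out: the owner still holds the mutex; handle the failure. */
    }
}
```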

Is enough free memory. Secondly, speed of allocation is important. A standard memory allocation scheme scans a linked list of indeterminate length to find a suitable free memory block, which is unacceptable in an RTOS since memory allocation has to occur within a bounded amount of time. Because mechanical disks have much longer and more unpredictable response times, swapping to disk files is not used for
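A sketch of the fixed-size-blocks approach mentioned later in the article: blocks are pre-allocated and chained on a free list, so both allocation and release take constant time (the block size and count are arbitrary assumptions, and a real RTOS would wrap both functions in a critical section):

```c
#include <stddef.h>

#define BLOCK_SIZE   64
#define BLOCK_COUNT  32

/* Every free block doubles as a node in a singly linked free list. */
typedef union block {
    union block  *next_free;
    unsigned char payload[BLOCK_SIZE];
} block_t;

static block_t  pool[BLOCK_COUNT];
static block_t *free_list;

/* Build the free list once at startup. */
void pool_init(void)
{
    for (size_t i = 0; i + 1 < BLOCK_COUNT; i++)
        pool[i].next_free = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next_free = NULL;
    free_list = &pool[0];
}

/* O(1) allocation: pop the head of the free list (NULL when exhausted). */
void *pool_alloc(void)
{
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next_free;
    return b;
}

/* O(1) release: push the block back onto the free list. */
void pool_free(void *p)
{
    block_t *b = p;
    b->next_free = free_list;
    free_list = b;
}
```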

Is frowned upon. Whenever possible, all required memory allocation is specified statically at compile time. Another reason to avoid dynamic memory allocation is memory fragmentation. With frequent allocation and releasing of small chunks of memory, a situation may occur where available memory is divided into several sections and the RTOS cannot allocate a large enough contiguous block of memory, although there

Is part of Huawei's '1+8+N' Internet of Things solution, and has been featured in a number of open-source development kits and industry offerings. Smartwatches by Huawei and its former Honor brand run LiteOS. LiteOS kernel variants have since been incorporated into the IoT-oriented HarmonyOS and the open-source OpenHarmony. On 20 May 2015, at the Huawei Network Conference, Huawei proposed

Is some node in a non-empty list, this code traverses through that list starting with someNode (any node will do), either forwards or backwards. Notice the postponing of the test to the end of the loop. This is important for the case where the list contains only the single node someNode. This simple function inserts a node into a circular doubly linked list after a given element: To do an "insertBefore", we can simply "insertAfter(node.prev, newNode)". Inserting an element in
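A C sketch of both traversal directions and of insertAfter for a circular doubly linked list (the do/while form is what places the test at the end of the loop, so the single-node case still visits someNode once):

```c
typedef struct Node {
    struct Node *prev;
    struct Node *next;
    int          data;
} Node;

/* Forwards: visit every node exactly once, starting from someNode. */
void traverse_forwards(Node *someNode) {
    Node *node = someNode;
    do {
        /* ... do something with node->data ... */
        node = node->next;
    } while (node != someNode);    /* test at the end handles a one-node list */
}

/* Backwards: the symmetric loop over the prev links. */
void traverse_backwards(Node *someNode) {
    Node *node = someNode;
    do {
        /* ... do something with node->data ... */
        node = node->prev;
    } while (node != someNode);
}

/* Insert newNode after node; insertBefore is just insertAfter(node->prev, newNode). */
void insertAfter(Node *node, Node *newNode) {
    newNode->prev = node;
    newNode->next = node->next;
    node->next->prev = newNode;
    node->next = newNode;
}
```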

Is the lowest-overhead method to prevent simultaneous access to a shared resource. While interrupts are masked and the current task does not make a blocking OS call, the current task has exclusive use of the CPU since no other task or interrupt can take control, so the critical section is protected. When the task exits its critical section, it must unmask interrupts; pending interrupts, if any, will then execute. Temporarily masking interrupts should only be done when
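A sketch of such a critical section using CMSIS-style intrinsics for an ARM Cortex-M target (the shared counter is a placeholder; other architectures expose equivalent mask/restore operations):

```c
#include <stdint.h>
#include "cmsis_compiler.h"   /* assumed to provide __get_PRIMASK(), __disable_irq(), __enable_irq() */

static volatile uint32_t shared_counter;

void increment_shared_counter(void)
{
    /* Remember whether interrupts were already masked so nesting restores correctly. */
    uint32_t primask = __get_PRIMASK();
    __disable_irq();               /* mask interrupts: start of the critical section */

    shared_counter++;              /* a few instructions, no loops, no blocking calls */

    if (primask == 0U)
        __enable_irq();            /* unmask only if interrupts were enabled on entry */
}
```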

The System Management Mode on x86-compatible hardware can take a lot of time before it returns control to the operating system. Memory allocation is more critical in a real-time operating system than in other operating systems. First, for stability there cannot be memory leaks (memory that is allocated but not freed after use). The device should work indefinitely, without ever needing a reboot. For this reason, dynamic memory allocation



The '1+2+1' Internet of Things solution and release the IoT operating system named Huawei LiteOS. It has been reported that development of the real-time operating system goes back as far as 2012.

Real-time operating system

A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is "jitter". A "hard" real-time operating system (hard RTOS) has less jitter than

The CPU could do nothing else useful. Because switching took so long, early OSes tried to minimize wasting CPU time by avoiding unnecessary task switching. In typical designs, a task has three states: running (executing on the CPU), ready (ready to be executed), and blocked (waiting for an event, such as I/O). Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU core. The number of items in the ready queue can vary greatly, depending on the number of tasks

The OS-related work to a separate handler. This handler runs at a higher priority than any thread but lower than the interrupt handlers. The advantage of this architecture is that it adds very few cycles to interrupt latency. As a result, OSes which implement the segmented architecture are more predictable and can deal with higher interrupt rates compared to the unified architecture. Similarly,

The head of the list: It allows the first node to modify the firstNode link easily. As long as a node is in a list, its previous link is never null. To insert a node before another, we change the link that pointed to the old node, using the prev link; then set the new node's next link to point to the old node, and change that node's prev link accordingly. To remove a node, we simply modify

The highest priority to jobs with the lowest demand on the computer, so there is no way to ensure that a time-critical job will have access to enough resources. Multitasking systems must manage the sharing of data and hardware resources among multiple tasks. It is usually unsafe for two tasks to access the same specific data or hardware resource simultaneously. There are three common approaches to resolve this problem: temporarily masking (disabling) interrupts, binary semaphores or mutexes, and message passing. General-purpose operating systems usually do not allow user programs to mask (disable) interrupts, because

The highest-priority ready task will take 5 to 30 instructions. In advanced systems, real-time tasks share computing resources with many non-real-time tasks, and the ready list can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked list would be inadequate. Some commonly used RTOS scheduling algorithms are cooperative scheduling, preemptive fixed-priority scheduling, rate-monotonic scheduling, round-robin scheduling, and earliest-deadline-first scheduling. A multitasking operating system like Unix is poor at real-time tasks. The scheduler gives

The insertion of a low-priority task, that high-priority task can be inserted and run immediately before the low-priority task is inserted. The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready task and restore the state of the highest-priority task to running. In a well-designed RTOS, readying a new task will take 3 to 20 instructions per ready-queue entry, and restoration of

The last node from a one-element list correctly. Notice that we also don't need separate "removeBefore" or "removeAfter" methods, because in a doubly linked list we can just use "remove(node.prev)" or "remove(node.next)" where these are valid. This also assumes that the node being removed is guaranteed to exist. If the node does not exist in this list, then some error handling would be required. Assuming that someNode

The list to find the previous node, so that its link can be modified. The first and last nodes of a doubly linked list are, for all practical applications, immediately accessible (i.e., accessible without traversal, and usually called head and tail) and therefore allow traversal of the list from the beginning or end of the list, respectively: e.g., traversing the list from beginning to end, or from end to beginning, in

The longest path through the critical section is shorter than the desired maximum interrupt latency. Typically this method of protection is used only when the critical section is just a few instructions and contains no loops. This method is ideal for protecting hardware bit-mapped registers when the bits are controlled by different tasks. When the shared resource must be reserved without blocking all other tasks (such as waiting for Flash memory to be written), it


The nodes live. Consider the following basic algorithms. Traversal of a doubly linked list can be in either direction. In fact, the direction of traversal can change many times, if desired. Traversal is often called iteration, but that choice of terminology is unfortunate, for iteration has well-defined semantics (e.g., in mathematics) which are not analogous to traversal. Traversal can proceed forwards or backwards. These symmetric functions insert
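A C sketch of the two traversal loops for the non-circular list described earlier (assuming a List with firstNode/lastNode pointers and nodes with prev/next fields):

```c
typedef struct Node {
    struct Node *prev;
    struct Node *next;
    int          data;
} Node;

typedef struct {
    Node *firstNode;
    Node *lastNode;
} List;

/* Forwards: from the first node to the last. */
void traverse_forwards(const List *list) {
    for (Node *node = list->firstNode; node != NULL; node = node->next) {
        /* ... do something with node->data ... */
    }
}

/* Backwards: from the last node to the first. */
void traverse_backwards(const List *list) {
    for (Node *node = list->lastNode; node != NULL; node = node->prev) {
        /* ... do something with node->data ... */
    }
}
```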

The object database to be in an inconsistent state because of the application's update. There are two major approaches to deal with this problem: the unified architecture and the segmented architecture. RTOSs implementing the unified architecture solve the problem by simply disabling interrupts while the internal catalogue is updated. The downside of this is that interrupt latency increases, potentially losing interrupts. The segmented architecture does not make direct OS calls but delegates

The previous and to the next node in the sequence of nodes) and one data field. The beginning and ending nodes' previous and next links, respectively, point to some kind of terminator, typically a sentinel node or null, to facilitate traversal of the list. If there is only one sentinel node, then the list is circularly linked via the sentinel node. It can be conceptualized as two singly linked lists formed from

The resource is managed directly by only one task. When another task wants to interrogate or manipulate the resource, it sends a message to the managing task. Although their real-time behavior is less crisp than that of semaphore systems, simple message-based systems avoid most protocol deadlock hazards and are generally better behaved than semaphore systems. However, problems like those of semaphores are possible. Priority inversion can occur when

The same data items, but in opposite sequential orders. The two node links allow traversal of the list in either direction. While adding or removing a node in a doubly linked list requires changing more links than the same operations on a singly linked list, the operations are simpler and potentially more efficient (for nodes other than first nodes) because there is no need to keep track of the previous node during traversal, nor any need to traverse

The same reasons as RAM allocation discussed above. The simple fixed-size-blocks algorithm works quite well for simple embedded systems because of its low overhead.

Doubly linked list

In computer science, a doubly linked list is a linked data structure that consists of a set of sequentially linked records called nodes. Each node contains three fields: two link fields (references to

The singly linked list and the regular doubly linked list. It shares some features with the singly linked list (single-direction traversal) and others from the doubly linked list (ease of modification). It is a list where each node's previous link points not to the previous node, but to the link to itself. While this makes little difference between nodes (it just points to an offset within the previous node), it changes
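A C sketch of this asymmetric layout, where the 'previous' field is a pointer to the link that points at the node (either the previous node's next field or the list head), so the first node needs no special-casing (the type and function names are illustrative):

```c
#include <stddef.h>

/* 'pprev' points at whatever pointer currently points to this node:
   either the previous node's 'next' field or the list's 'first' field. */
typedef struct ANode {
    struct ANode  *next;
    struct ANode **pprev;
    int            data;
} ANode;

typedef struct {
    ANode *first;   /* NULL when the list is empty */
} AList;

/* Insert node at the front of the list. */
void alist_insert_front(AList *list, ANode *node) {
    node->next  = list->first;
    node->pprev = &list->first;
    if (list->first != NULL)
        list->first->pprev = &node->next;
    list->first = node;
}

/* Remove node from whatever list it is on; no head special case, because
   *node->pprev is exactly the link that has to be redirected. */
void alist_remove(ANode *node) {
    *node->pprev = node->next;
    if (node->next != NULL)
        node->next->pprev = node->pprev;
}
```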

The system needs to perform and the type of scheduler that the system uses. On simpler non-preemptive but still multitasking systems, a task has to give up its time on the CPU to other tasks, which can cause the ready queue to have a greater number of overall tasks in the ready-to-be-executed state (resource starvation). Usually, the data structure of the ready list in the scheduler is designed to minimize

The user program could control the CPU for as long as it is made to. Some modern CPUs do not allow user-mode code to disable interrupts, as such control is considered a key operating system resource. Many embedded systems and RTOSs, however, allow the application itself to run in kernel mode for greater system call efficiency and also to permit the application to have greater control of the operating environment without requiring OS intervention. On single-processor systems, an application running in kernel mode and masking interrupts

The worst-case length of time spent in the scheduler's critical section, during which preemption is inhibited and, in some cases, all interrupts are disabled; but the choice of data structure depends also on the maximum number of tasks that can be on the ready list. If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely optimal. If the ready list usually contains only

