Autosar layered software architecture epub download


The Layered Software Architecture document summarizes the architectural decisions and discussions of AUTOSAR and maps the identified modules onto layers. In other words, it describes the software architecture of AUTOSAR: a layered architecture whose main goal is to decouple the application software from the underlying ECU hardware.


Hybrid micro-machines tend to integrate multiple functional modules from different vendors for the best value and performance. However, the lack of plug-and-play solutions leads to tremendous difficulty in system integration. This paper proposes, for the first time, a novel three-layer control architecture for the system integration of hybrid micro-machines. The interaction with the hardware is encapsulated into software components, while the data flow among the different components is standardized. The proposed control architecture enhances the flexibility of the computer numerical control (CNC) system to accommodate a broad range of functional modules. The component design also improves the scalability and maintainability of the whole system. The effectiveness of the proposed control architecture has been successfully verified through the integration of a six-axis hybrid micro-machine. Thus, it provides invaluable guidelines for the development of next-generation CNC systems for hybrid micro-machines.

Keywords: hybrid micro-machine, control architecture, system integration, software component, computer numerical control (CNC)

These products are usually made of a wide range of engineering materials and possess complex freeform surfaces with tight tolerances on form accuracy and surface finish. Although conventional stand-alone micro-manufacturing processes, such as micro-milling, laser machining, electrical discharge machining (EDM), and so forth, have been the major approaches to manufacturing such products, their predictability, producibility, and productivity remain big issues [ 1 ].

Quality requirements, on the other hand, define desired attributes of the system, such as performance or reliability [ 17 ]. Software-intensive embedded systems are often used to control and monitor real-world processes, so their effectiveness is not determined by computational correctness alone. Instead, quality requirements are often the primary drivers of the entire system architecture [ 18 ].

Functional requirements are usually satisfied by the application code within a partition, so in order to assess the equivalence of partitions with time-sharing and partitions executing in parallel, the focus needs to be shifted to quality requirements. For safety-critical embedded systems, quality requirements usually refer to timing or safety attributes.

In some cases, security attributes play an important role as well. Timing attributes often comprise deadlines for specific tasks, which have to be met under all circumstances. This leads to the requirement of fully deterministic system behavior: the system engineer must be able to know what the system is doing at all times during the entire lifetime of the system. Safety-related attributes, on the other hand, are concerned with the assurance that no human lives are endangered when the system is running.

These attributes can be addressed with run-time monitoring in addition to independence and redundancy of resources. These patterns provide protection, so that no single error leads to a system failure with human lives at stake. Safety measures also encompass designated safe states to which the system may transition in case of a fault at run-time.

Finding a safe state is not always trivial. Therefore, the analysis of safe states and possible transitions to these states is often a mandatory part of a safety assessment for a safety-critical embedded system. How does a software partitioning approach for applications help to address the timing and safety attributes of a system?

In general terms, partitioning is a resource assignment to an application. In that sense, it is vital for the application and the entire system, as an application cannot execute properly if no or insufficient resources are assigned to it. If an application does not get enough processor time from the partition scheduler in the operating system, it may not be able to produce its results in time, so deadlines may be missed and timing attributes are in jeopardy.
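As a back-of-the-envelope illustration (a minimal sketch in Python; the partition budget, task set, and helper name are hypothetical), a partition whose granted CPU budget does not cover the worst-case demand of its tasks within the scheduling period will inevitably miss deadlines:

```python
# Hypothetical example: does the partition's CPU budget cover its worst-case demand?

def partition_budget_sufficient(budget_ms, tasks, period_ms):
    """tasks: list of (wcet_ms, task_period_ms); all task periods divide period_ms."""
    demand_ms = sum(wcet * (period_ms // task_period) for wcet, task_period in tasks)
    return demand_ms <= budget_ms

# A partition with a 20 ms CPU budget per 100 ms major frame ...
tasks = [(4, 50), (6, 100), (3, 25)]   # (WCET, period) in ms -> demand = 8 + 6 + 12 = 26 ms
print(partition_budget_sufficient(20, tasks, 100))  # False: deadlines may be missed
print(partition_budget_sufficient(30, tasks, 100))  # True: 26 ms of demand fits into 30 ms
```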

A correct resource assignment is also very important to satisfy safety requirements. If an incorrect assignment creates an unintended interference channel between two separate partitions [ 10 ], the isolation that is supposed to prevent fault propagation between partitions is flawed and the integrity of the entire system is endangered.

These quality requirements constitute the many-core migration challenge and lead to the difference between having multiple partitions on a single-core platform with time-sharing and multiple partitions on a many-core platform allowing for true parallelism. With time-sharing, each partition has exclusive access to all hardware resources during its time slice. The only interruption to be expected comes from the operating system scheduler when the end of a time slice has been reached.

Therefore, the entire system becomes very deterministic as there are no unforeseen waiting times to be expected due to resource contention at run-time. Every application executes as if it were the only application executing on that platform.
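A strictly time-sliced partition schedule of this kind can be pictured as a cyclically repeated major frame; the following sketch (Python, with made-up partition names and slice lengths, not tied to any particular RTOS) shows the idea:

```python
# Simplified sketch of a cyclically repeated partition schedule (major frame).
# Each partition owns the processor exclusively during its slice.

MAJOR_FRAME = [          # (partition, slice length in ms) -- hypothetical values
    ("flight_control", 25),
    ("navigation",     15),
    ("maintenance",    10),
]

def dispatch(major_frame, frames=2):
    t = 0
    for _ in range(frames):
        for partition, slice_ms in major_frame:
            # In a real system the scheduler would switch address spaces here;
            # within [t, t + slice_ms) no other partition can interfere.
            print(f"t={t:3d} ms: run {partition} for {slice_ms} ms")
            t += slice_ms

dispatch(MAJOR_FRAME)
```

Because the slice boundaries are fixed offline, the start time of every partition can be predicted for the entire lifetime of the system.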

On a many-core platform, two applications running in parallel in separate partitions are suddenly competing for resources, which affects the quality attributes of the entire system. The performance may degrade significantly due to cache thrashing. Concurrent access to resources, such as network interfaces, may lead to nondeterministic waiting times, thus risking the deterministic system behavior. Generally speaking, every processor resource (caches, on-chip interconnect, memory controller, …) which is now shared between partitions as a result of parallel execution constitutes a potential channel for interference.

How does this affect the software engineering approach? Without parallel execution, applications executing within a partition could be developed and integrated on the platform with only very little knowledge about other partitions. A partition schedule simply defined when to start which partition. With parallel execution, that is, several partitions potentially running at the same time, their integration becomes more challenging.

Every partition needs to define which resources it needs for execution. This encompasses not only processor time, but also on-chip interconnect links, caches, memory controllers, interrupt controllers, CAN bus controllers, and so on: essentially every resource which may be shared and concurrently accessed at run-time.

In order to integrate several partitions on a many-core processor platform, these resource requirements need to be aligned, so that no resource is overbooked and resource contention is eliminated. Safety analysis for such a multi-tenant system also becomes more challenging. A single fault within the resources used by one partition may then affect the execution of another partition as well. Shutting down the entire control unit may help to reach a safe state, but on the other hand, this will affect all functions deployed on this processor.

In some situations, a fine-grained approach on the partition level may be applicable. If a fault manifests itself in a commonly used resource, all partitions using this resource have to transition to their designated safe state. Generally speaking, on a parallel hardware platform the resource requirements of each partition need to be compared to the requirements of every other partition on the same platform in order to satisfy quality requirements.

This integration effort increases with an increasing number of partitions and an increased use of shared resources. Therefore, it is safe to assume that the more the embedded domain embraces many-core architectures, the more challenging the integration will become and the more it will pay off to look at efficient ways to address this integration challenge.

Traditional Integration Approaches

The last section described the major challenges of a multi-function integration aimed at tapping the full performance potential of a many-core hardware platform.

Essentially, there are three distinct obstacles to be addressed during the integration: (i) identification of all shared resources, (ii) providing sufficient isolation between partitions to prevent fault propagation via shared resources, and (iii) creating and managing an assignment of resources to partitions, so that quality requirements, especially timing, can be satisfied.

The first obstacle can be addressed by a thorough analysis of all applications and the hardware platform. Therefore, it is often necessary to acquire detailed documentation about the processor architecture. With all shared resources being successfully identified, several well-established approaches can be used to ensure isolation.

Rushby [ 13 ] gives a detailed overview about static and dynamic isolation techniques in the avionics and aerospace domain. In addition to the approaches mentioned by Rushby, highly reliable hypervisors can also be used to isolate separate partitions from each other cf.

The third obstacle turns out to be the most challenging one, especially on many-core processors. This is because traditional approaches for the integration of software applications have reached their limits when quality requirements, such as timing properties, have to be met.

Changes to the resource assignment are realized based on engineering experience and estimations about resource consumption, and the resulting timing behavior is typically checked with real-time simulators. These simulators help to determine whether the priority assignment to tasks and the chosen scheduling algorithm is sufficient to satisfy all real-time requirements. However, this analysis is of NP complexity, and for a larger task set it may take up a significant amount of time in the engineering process.
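As a simplified illustration of the kind of schedulability check involved (a standard worst-case response-time iteration for fixed-priority preemptive tasks on one core; the task set is made up, and this is not the simulation approach referred to above):

```python
# Worst-case response-time analysis for fixed-priority, preemptive scheduling
# on a single core: R_i = C_i + sum_{j in hp(i)} ceil(R_i / T_j) * C_j.

import math

def response_time(tasks):
    """tasks: list of (wcet, period), sorted by descending priority.
    Returns worst-case response times, or None if a deadline (= period) is missed."""
    results = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            r_next = c_i + interference
            if r_next > t_i:          # deadline missed
                return None
            if r_next == r:           # fixed point reached
                break
            r = r_next
        results.append(r)
    return results

# Hypothetical task set: (WCET, period) in ms, rate-monotonic priority order.
print(response_time([(1, 4), (2, 8), (3, 16)]))   # [1, 3, 7] -> all deadlines met
```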

For low or noncritical applications, it may be sufficient to focus on the average case. This approach is often combined with an overprovisioning of resources, based on the assumption that, if the worst-case scenario occurs, these spare resources will help to mitigate the effects.

For safety-critical applications, in contrast, every access to system resources, for example, processors or network interfaces, has to be coordinated based on knowledge available at design time. Providing spare resources is no longer a viable option, as no deviations from the static operating system schedule are tolerated. The entire system behavior, based on the execution patterns of all partitions on a processor, has to be defined at design time. This information is often not gathered during the engineering process and is therefore seldom available during the integration.

Nevertheless, the process of designing a schedule satisfying all timing requirements of all partitions is very cumbersome and, at the same time, a very sensitive matter. Creating a static schedule thus often takes several person-months for reasonably complex systems. The complexity of this specific scheduling problem arises not only from the challenge of selecting tasks so that all deadlines are met.

Instead, the schedule has to mimic the entire system behavior with regard to timing; for example, context switches take a certain amount of time, or writing to buffers after a computation has finished may take some additional time.

It also has to incorporate intra- and inter-application relations, which may affect the order of execution. Furthermore, the execution of some applications may be split into several slices (offline-determined preemption), which affects the number of context switches and thus the timing. And last but not least, it is often necessary to optimize the schedule according to certain criteria. This is especially beneficial when hypervisor-based systems with a two-level scheduling approach for partitions and applications need to be addressed.

For these systems, it is a common goal to optimize the schedule for a minimal number of partition switches in order to minimize the overhead introduced by the hypervisor. Timing is a cross-cutting concern for the entire system design. Therefore, it needs to be addressed and incorporated throughout the entire engineering process [ 22 ]. And timing requirements are only a part of the quality requirements that need to be addressed explicitly for a multi-function integration.

The same is true for quality requirements regarding safety. Partitions cannot be freely deployed on the entire system architecture spanning over several processors. Often pairs of partitions need to be mapped on fully dissimilar hardware components to avoid a failure due to undetected design errors in the hardware.

In some cases with lower criticality requirements, it is sufficient to map partitions onto redundant hardware nodes. This task can still be accomplished manually up to a problem size of about 20 partitions on 10 to 15 processors.

However, in complex avionics systems, a considerably larger number of processors and individual functions will have to be managed in the foreseeable future. A similar trend towards an increasing number of electronic control units and software components is evident in automotive systems, where more than 80 ECUs in current high-end automobiles are not uncommon.

With many-core processors, the engineers face a significant increase in complexity, which cannot be handled manually in an efficient way. The complexity of the hardware increases as there are significantly more shared resources. At the same time, there will be more and more partitions to be migrated to a single processor.

With more partitions being integrated on the same platform, more development teams—which are most likely spatially spread over several companies—need to be synchronized.

This poses an interesting nontechnical management challenge all by itself. At the same time, the development processes will have to become more agile in order to address the need for a shorter time to market and shorter product evolution cycles.

Generally speaking, the ability to efficiently realize a temporal and spatial assignment of resources to partitions determines the degree to which important quality attributes can be satisfied. In the end, it also determines the level to which software components can be integrated on the same hardware processor platform.

A multi-function integration on a many-core system cannot be done efficiently with the aforementioned traditional approaches. So the research question remains: how can this integration challenge, which determines the degree to which the performance of many-core processors can be exploited in software-intensive embedded systems, be tackled? A few years ago, the development of complex safety-critical software (not entire systems) was confronted with a similar challenge.

The application engineers were confronted with a very high system complexity. They had to satisfy challenging functional requirements and of course, there was zero tolerance for defects. This challenge was addressed with a new approach: software development based on formal methods. The underlying idea was based on the realization that the correctness of reasonably complex software could no longer be assured by analyzing all possible execution paths.

The correctness of a system could not be proven by simply observing its behavior. There were simply too many execution paths, so that each path could not be analyzed and tested without significantly exceeding the project budget. Instead of analyzing the exhibited behavior of the software after it has been built, this new approach focuses on earlier stages in the development: requirements engineering, design, and implementation.

The design space of the entire system is restricted by a construction rule-set, so that a significant number of implementation errors are circumvented. The challenge of a multi-function, mixed-criticality integration on many-core systems could be properly addressed by a new engineering approach based on similar tactics.

The complexity and volatility of the design space simply exceed the capabilities of traditional integration approaches. So the question is: how can this approach be adapted and used for an efficient construction of a resource assignment in a multi-function integration scenario? Although correctness by construction applies to the entire life cycle of a software component, it focuses on the programming aspect. How does this principle help with the multi-function integration? The benefits are desirable, but the adoption of a similar approach for the integration is still the subject of applied research.

Up to this point, the following promising strategies have been adapted to the integration challenge based on similar strategies for the prevention and removal of defects in software [ 24 ].

Write Right

When quality requirements need to be addressed during the integration, they need to be explicitly expressed and formalized, so that there is no ambiguity about their meaning. A distinction should be made between specifications of the resource supply provided by the hardware and specifications of the resource demand of the applications.

Typically, these specifications need to contain, among other things, spatial parameters (e.g., …). In addition, the hardware architecture of the system, with its processing nodes and communication channels, has to be modeled, as well as the software architecture with all of its applications and their communication intensity.
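A minimal sketch of how such supply and demand specifications might be captured (Python dataclasses; all field names and example values are illustrative assumptions, not the notation used by the authors):

```python
from dataclasses import dataclass, field

@dataclass
class ProcessorNode:                 # resource supply of one processing node
    name: str
    cores: int
    memory_kib: int
    channels: list[str] = field(default_factory=list)   # attached communication channels

@dataclass
class Application:                   # resource demand of one application/partition
    name: str
    memory_kib: int                  # spatial parameter
    wcet_ms: int                     # temporal parameter
    period_ms: int
    criticality: str                 # e.g. a design assurance level
    communicates_with: dict[str, int] = field(default_factory=dict)  # peer -> intensity

hardware = [ProcessorNode("cpu0", cores=4, memory_kib=262144, channels=["can0", "eth0"])]
software = [Application("fuel_mgmt", memory_kib=4096, wcet_ms=3, period_ms=40,
                        criticality="DAL-B", communicates_with={"display": 5})]
```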

A resource assignment answers two questions: where does an application get executed, that is, on which resources, and when does it get executed on these resources, that is, according to which scheduling pattern. In a first step, a mapping is constructed; each constructed mapping should be validated independently to check whether all spatial and safety-relevant parameters are correctly addressed. In a second step, a schedule gets constructed based on the mapping from the previous step. If a schedule cannot be constructed, feedback should be provided so that the initial mapping can be efficiently adapted accordingly.
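Sketched in Python, this two-step construction with its feedback loop might look as follows; the greedy first-fit mapping and the utilization-based schedulability stand-in are deliberate simplifications and purely illustrative:

```python
# Schematic two-step construction: (1) map partitions to processors under a capacity
# constraint, (2) check that a schedule exists (here a simple utilization limit).
# All names, the greedy strategy, and the feedback rule are illustrative assumptions.

def construct_mapping(processors, partitions, forbidden):
    """Greedy first-fit: assign each partition to the first processor whose
    remaining CPU utilization allows it and that is not forbidden for it."""
    load = {p: 0.0 for p in processors}
    mapping = {}
    for name, utilization in partitions.items():
        for proc in processors:
            if (name, proc) in forbidden:
                continue
            if load[proc] + utilization <= 1.0:
                mapping[name] = proc
                load[proc] += utilization
                break
        else:
            return None                       # no processor can host this partition
    return mapping

def construct_schedule(mapping, partitions, limit=0.8):
    """Stand-in feasibility check: a processor counts as schedulable if its total
    utilization stays below a configurable limit; otherwise report the conflict."""
    per_proc = {}
    for name, proc in mapping.items():
        per_proc[proc] = per_proc.get(proc, 0.0) + partitions[name]
    overloaded = [p for p, u in per_proc.items() if u > limit]
    return (per_proc, None) if not overloaded else (None, overloaded[0])

def integrate(processors, partitions):
    forbidden = set()
    while True:
        mapping = construct_mapping(processors, partitions, forbidden)
        if mapping is None:
            raise RuntimeError("no feasible mapping found")
        schedule, conflict = construct_schedule(mapping, partitions)
        if schedule is not None:
            return mapping, schedule
        # Feedback: ban the most demanding partition from the overloaded processor.
        victim = max((n for n, p in mapping.items() if p == conflict),
                     key=lambda n: partitions[n])
        forbidden.add((victim, conflict))

partitions = {"p1": 0.5, "p2": 0.4, "p3": 0.3}      # hypothetical CPU utilizations
print(integrate(["cpu0", "cpu1"], partitions))
```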

Check Here before Going There

Every resource assignment should be checked as to whether its demand exceeds the resource supply. The distinction between additive and exclusive resources and the determination of a resource capacity are crucial prerequisites for this validation. While exclusive resources can be acquired by only one application, additive resources can be used by more than one application until their capacity has been reached.

However, the differentiation is not as simple as it appears and depends on other parameters as well. For instance, a network interface may typically be used by several applications, so it may be categorized as additive; a simple check of this kind is sketched below.
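A minimal sketch of such a supply-versus-demand check (Python; the classification of each resource as exclusive or additive, the capacities, and the application names are illustrative assumptions):

```python
# Check a resource assignment: exclusive resources may have at most one user,
# additive resources may be shared until their declared capacity is reached.

supply = {
    "can0": {"kind": "additive",  "capacity": 1000},   # e.g. frames per second
    "dma0": {"kind": "exclusive", "capacity": 1},
    "mem":  {"kind": "additive",  "capacity": 65536},  # KiB
}

demand = [            # (application, resource, amount requested)
    ("engine_ctrl", "can0", 400),
    ("gearbox",     "can0", 500),
    ("engine_ctrl", "dma0", 1),
    ("diagnosis",   "dma0", 1),     # second user of an exclusive resource
    ("diagnosis",   "mem",  8192),
]

def check_assignment(supply, demand):
    errors = []
    for res, spec in supply.items():
        users = [(app, amount) for app, r, amount in demand if r == res]
        if spec["kind"] == "exclusive" and len(users) > 1:
            errors.append(f"{res}: exclusive resource requested by {len(users)} applications")
        total = sum(amount for _, amount in users)
        if total > spec["capacity"]:
            errors.append(f"{res}: demand {total} exceeds capacity {spec['capacity']}")
    return errors

print(check_assignment(supply, demand))   # reports the double booking of dma0
```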

On the other hand, are all applications allowed to access the interface at the same time? Do all applications need to have been developed according to the same criticality level to avoid an unintended interference channel?

Screws: Use a Screwdriver, Not a Hammer

For the construction of mappings and schedules, a variety of algorithmic approaches in combination with special heuristics can be applied to tackle the challenge of NP-completeness.

There is no omnipotent tool applicable to all integration challenges.

Every approach has its advantages in the construction process, but only the combination of all tools significantly boosts the efficiency. As stated at the beginning of this section, the application of correctness by construction for a multi-function integration on many-core processors is still subject to active research.

The next section will give an overview of case studies in which this principle was evaluated in practice.

Case Studies and Current Research

A model-based approach for the construction of static operating system schedules was used as an initial study of feasibility [ 26 ]. It is based on the underlying assumption that predictable and entirely deterministic real-time behavior for a system that relies only on periodic tasks with known periods can be achieved on multi- or many-core processors with static schedules, provided the following information, or at least an estimation of it, is available at design or configuration time: (1) timing characteristics, for example, the worst-case execution time (WCET), of all applications and their processes on the underlying hardware platform; (2) scheduling dependencies between applications and processes; (3) off-chip resource usage patterns of all applications, such as busses or other significant external devices.
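To make the inputs and the result concrete, the following deliberately simplified sketch (Python, single core, no dependencies or off-chip resource patterns; the task set is made up) builds a static slot table over one hyperperiod from WCETs and periods:

```python
import math

def hyperperiod(periods):
    h = 1
    for p in periods:
        h = h * p // math.gcd(h, p)
    return h

def build_static_schedule(tasks, slot_ms=1):
    """tasks: {name: (wcet_ms, period_ms)}. Greedily place each job of each task
    into free slots of one core before its next release; returns a slot table
    or None if some job cannot be placed (no feasible static schedule found)."""
    h = hyperperiod([p for _, p in tasks.values()])
    table = [None] * (h // slot_ms)                       # one entry per time slot
    for name, (wcet, period) in sorted(tasks.items(), key=lambda kv: kv[1][1]):
        for release in range(0, h, period):
            deadline = release + period
            needed = wcet // slot_ms
            free = [s for s in range(release // slot_ms, deadline // slot_ms)
                    if table[s] is None]
            if len(free) < needed:
                return None                               # job misses its deadline
            for s in free[:needed]:
                table[s] = name
    return table

# Hypothetical task set: (WCET, period) in ms.
tasks = {"sensor": (1, 4), "control": (2, 8), "logging": (3, 16)}
print(build_static_schedule(tasks))   # slot table over a 16 ms hyperperiod
```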

All conflicts that may appear at run-time and lead to unpredictable timing are resolved statically at design time. Our approach aims to optimize the schedule beforehand rather than troubleshooting afterwards. It automatically generates a valid static schedule for high-level timing characteristics of a given set of applications and a given hardware architecture.

In addition to generating solutions, that is, static schedules, for the underlying NP-hard problem, the user is also given the opportunity to adjust the generated schedules to specific needs and purposes. For the sake of simplicity, the input is currently modeled in a textual notation similar to Prolog clauses. The input model and the problem specifications have been developed to suit the needs of a safety critical domain: avionics.

All tasks are executed at run-time with the help of a time-triggered dispatcher. This approach is often overly pessimistic, but still a hard requirement for systems in accordance with avionic safety levels A, B, C, and D. However, there are other scheduling approaches aimed at optimizing the resource utilization despite having a fixed schedule. Another approach to improve resource utilization is based on specifying lower and upper bounds on the execution time.

With the help of precedence constraints, a robust schedule can also be constructed, which avoids possible scheduling anomalies [ 27 ]. Our approach instead tries to capture the timing behavior of the entire system and lets the system engineer construct the desired behavior.

Therefore, tasks may be preempted by other tasks if necessary. Tolerated jitter can be specified for each application. Relations between tasks executing on different processors can be incorporated into the schedule as well. Furthermore, the modeled system can also contain resources other than processing time. These resources may be used by an application either exclusively (for example, network bandwidth) or cumulatively (for example, an actuator).

During the construction of a schedule, it is guaranteed that the capacity of these external resources is not exceeded. Although the worst-case run-time for a complete search is exponential, the heuristics allow for affordable run times.

For a real-world example consisting of three multi-core processors, about 40 applications, a scheduling hyperperiod of … ms, and a 1 ms time slot, it requires about … ms for generating the model from the input data and about … ms for searching a valid solution, that is, a feasible static schedule, when executed on a virtual machine running Microsoft Windows XP on a … GHz dual-core laptop with … GB of RAM.

The generated schedule and the mapping onto processors and cores are presented in a graphical user interface (see Figure 6(a)). For each processing element, all processes that are executed at a given time within the scheduling period are graphically represented by colored bars in a single row.

Relations between processes are indicated by solid lines. Additionally, the usage of external resources is presented in a special Resource Window (see Figure 6(b)).

Figure 6(b) depicts the resource usage of a fictional off-chip resource. The generated schedule is also exported to a file, which is usually transformed to satisfy the formatting requirements of specific operating systems. After being approved by the certification authority, it is used in the final configuration of the scheduling component in the operating system. Dynamic scheduling approaches based on fixed or variable priorities represent another common approach for real-time scheduling to allow for exclusive access to the hardware.

Although this approach improves resource utilization, it also hides relations between tasks and scheduling constraints by assigning priorities. Priorities express a relationship to other tasks, but they do not capture the intent of the system engineer. Furthermore, with multi- and many-core processors, tasks with different priorities may still be executed at the same time as there may be plenty of cores available. This may not be a problem for systems with less stringent predictability requirements.

However, it quickly becomes an issue for safety-critical systems in which predictability and determinism matter. Generating static schedules for safety-critical multi-core systems is only the first step in improving the development processes and facilitating certification.

Validator

Due to the importance of an operating system schedule for a safety-critical system, special attention has to be given to its verification during certification and official approval, especially since it was automatically constructed by a software tool.

Usually, there are two options for a system engineer who wishes to use tools to automate development processes: either the software tool has to be qualified, which is a very intricate and costly task, or the result has to be verified with another, dissimilar verification tool (see [ 14 ], pages 59 ff.). The verification tool does not need to be qualified, as its output is not directly used in the final product. Here the second option was chosen, so that a special validator for static multi-core schedules is currently in development.
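The following sketch illustrates, in strongly simplified form, the kind of properties such an independent validator can re-check against the formalized requirements (plain Python checks over a slot table rather than the model-checking formulation with never claims described below; the table re-uses the task set from the construction sketch above):

```python
def validate_schedule(table, tasks, slot_ms=1):
    """Independently re-check a single-core slot table: every job of every task
    must receive its full WCET budget before its deadline, and no slot may be
    assigned to an unknown task."""
    errors = []
    horizon = len(table) * slot_ms
    known = set(tasks)
    for s, owner in enumerate(table):
        if owner is not None and owner not in known:
            errors.append(f"slot {s}: unknown owner {owner!r}")
    for name, (wcet, period) in tasks.items():
        for release in range(0, horizon, period):
            window = table[release // slot_ms:(release + period) // slot_ms]
            granted = window.count(name) * slot_ms
            if granted < wcet:
                errors.append(f"{name}: job released at {release} ms gets "
                              f"{granted} ms of {wcet} ms before its deadline")
    return errors

# Re-using the hypothetical task set from the construction sketch above.
tasks = {"sensor": (1, 4), "control": (2, 8), "logging": (3, 16)}
table = ["sensor", "control", "control", "logging", "sensor", "logging", "logging", None,
         "sensor", "control", "control", None, "sensor", None, None, None]
print(validate_schedule(table, tasks))   # -> [] if all requirements hold
```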

If all properties were successfully checked, that is, no never claim turned out to be true, the model of the static schedule satisfies all formalized software requirements and the construction was done correctly.

Mapper

Previously, the challenge of mapping software components, that is, partitions and processes, onto processors and their cores was introduced.

Currently, this is often done manually, which may be sufficient for systems with only a few processors. However, it is certainly not a viable approach for complex systems comprising a much larger number of processors, especially since mapping for safety-critical systems does not solely depend on achieving schedulability, but also on other safety criteria, such as redundancy, dissimilarity, and independence. For instance, the mapping of an application comprising two redundant partitions should not allow these identical partitions to be mapped onto the same core or the same processor.

This would clearly lead to a violation of safety requirements, because the underlying hardware usually does not offer enough redundancy for the designated assurance level. It is also common for safety-critical applications to comprise partitions which implement the same functionality, but in a dissimilar fashion.

Depending on the criticality level, these partitions may need to be mapped onto dissimilar processors and configured to use dissimilar communication channels to protect against design errors. Redundancy and dissimilarity have to be expressed in relation to certain aspects of the underlying hardware: for instance, two given partitions may have to be mapped onto processors with redundant power supplies and dissimilar communication channels.
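A minimal sketch of how such redundancy and dissimilarity constraints could be checked against a candidate mapping (Python; the constraint kinds, hardware attributes, and partition names are illustrative assumptions):

```python
# Check redundancy/dissimilarity constraints of a partition-to-processor mapping.

processors = {
    "cpuA": {"type": "PowerPC", "power_supply": "psu1"},
    "cpuB": {"type": "ARM",     "power_supply": "psu2"},
}

constraints = [
    ("different_processor", "autopilot_1", "autopilot_2"),   # redundant replicas
    ("dissimilar_type",     "brake_ctrl_a", "brake_ctrl_b"), # dissimilar implementations
]

def check_mapping(mapping, processors, constraints):
    errors = []
    for kind, p1, p2 in constraints:
        proc1, proc2 = mapping[p1], mapping[p2]
        if kind == "different_processor" and proc1 == proc2:
            errors.append(f"{p1} and {p2} are redundant but share processor {proc1}")
        if kind == "dissimilar_type" and processors[proc1]["type"] == processors[proc2]["type"]:
            errors.append(f"{p1} and {p2} require dissimilar processor types")
    return errors

mapping = {"autopilot_1": "cpuA", "autopilot_2": "cpuA",
           "brake_ctrl_a": "cpuA", "brake_ctrl_b": "cpuB"}
print(check_mapping(mapping, processors, constraints))
# -> flags the two autopilot replicas mapped onto the same processor
```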

Acknowledging the fact that the design of safety-critical embedded systems has to follow complex safety restrictions and regulations, but at the same time also needs to minimize costs and reduce space, weight, and power requirements, a mapper is currently in development which constructs feasible mappings that satisfy the safety constraints (see Figure 7). Based on each mapping, the construction of a schedule is attempted; if such a schedule cannot be found, the mapper is requested to modify the initial mapping.

The mapper may then choose to remove partitions from over-utilized processors, so that a schedule can be constructed and all mapping-related safety requirements are met.

Figure 7: Construction of mappings from applications onto processors and cores.

Conclusions and Discussion

Many-core processors can be used to significantly improve the value of software-intensive systems. A small power envelope in conjunction with unprecedented performance levels, at least in the embedded domain, paves the way for cyber-physical systems. Having many, but relatively simple, cores requires the exploitation of parallelism on all levels, especially on the application layer and the thread layer.

This approach constitutes a significant prerequisite for tapping the full potential of a massively parallel processor. At the same time, it leads to a multi-tenant situation when multiple functions from different vendors are to be integrated on the same chip.

Trends and standardization efforts in key domains, that is, avionics and automotive, already address these challenges arising from the functional requirements.

Integrated modular avionics in conjunction with ARINC 653 in the avionics domain and AUTOSAR in the automotive domain provide a standardized abstraction layer, so that software components can be developed relatively independently of the specifics of the underlying processor hardware.

While this helps to address the functional correctness of integrating several applications on a many-core processor, quality requirements, especially timing and safety, are not sufficiently addressed. These issues become especially apparent during the integration, when applications are suddenly competing for resources, which may become congested, leading to unpredictable waiting times and jeopardizing the predictable system behavior.
