Comments

Published in Ramaprasad, H., & Mueller, F. (2006). Tightening the bounds on feasible preemption points. 27th IEEE International Real-Time Systems Symposium, 2006. RTSS '06. 212 - 224. doi: 10.1109/RTSS.2006.49 ©2006 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

Abstract

Caches have become invaluable for higher-end architectures to hide, in part, the increasing gap between processor speed and memory access times. While the effect of caches on the timing predictability of single real-time tasks has been the focus of much research, bounding the overhead of cache warm-ups after preemptions remains a challenging problem, particularly for data caches. This paper makes multiple contributions. 1) We bound the penalty of cache interference for real-time tasks by providing accurate predictions of data cache behavior across preemptions, including instruction cache and pipeline effects. We show that, when considering cache preemption, the critical instant does not occur upon simultaneous release of all tasks. 2) We develop analysis methods to calculate upper bounds on the number of possible preemption points for each job of a task. To make these bounds tight, we consider the entire range between the best-case and worst-case execution times (BCET and WCET) of higher-priority jobs. The effects of cache interference are integrated into the WCET calculations by using a feedback mechanism to interact with a static timing analyzer. Experiments show significant improvements in tightening bounds, up to an order of magnitude over two prior methods and up to half an order of magnitude over a third, for (a) the number of preemptions, (b) the WCET, and (c) the response time of a task. Overall, this work contributes a calculation of the worst-case preemption delay that takes data caches into account.
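To make the idea of bounding preemptions and folding a cache penalty into response times concrete, the sketch below shows a classic iterative response-time analysis in which each preemption by a higher-priority job is charged an additional cache-related preemption delay. This is only an illustrative sketch under standard fixed-priority assumptions, not the paper's analysis (which additionally exploits the BCET-WCET range and per-preemption data-cache predictions); the task set and the crpd values are hypothetical.

    /*
     * Illustrative sketch: iterative response-time analysis with a
     * per-preemption cache-related preemption delay (CRPD) term.
     * Task parameters and the crpd field are hypothetical examples.
     */
    #include <stdio.h>
    #include <math.h>

    #define NTASKS 3

    typedef struct {
        double wcet;    /* worst-case execution time C_i                     */
        double period;  /* period / minimum inter-arrival time T_i           */
        double crpd;    /* assumed cache reload penalty per preemption by j  */
    } task_t;

    /* Tasks sorted by priority: index 0 is the highest priority. */
    static const task_t tasks[NTASKS] = {
        { 1.0,  5.0, 0.2 },
        { 2.0, 10.0, 0.3 },
        { 4.0, 20.0, 0.0 },
    };

    /* Fixed-point iteration:
     * R_i = C_i + sum over j < i of ceil(R_i / T_j) * (C_j + crpd_j),
     * where ceil(R_i / T_j) bounds the number of preemptions by task j. */
    static double response_time(int i)
    {
        double r = tasks[i].wcet, prev;
        do {
            prev = r;
            r = tasks[i].wcet;
            for (int j = 0; j < i; j++)
                r += ceil(prev / tasks[j].period)
                     * (tasks[j].wcet + tasks[j].crpd);
        } while (r > prev && r <= tasks[i].period); /* converged or deadline exceeded */
        return r;
    }

    int main(void)
    {
        for (int i = 0; i < NTASKS; i++)
            printf("task %d: response-time bound = %.1f\n", i, response_time(i));
        return 0;
    }

In this simple model, the ceiling term both bounds the number of preemption points per job and scales the cache penalty charged to the preempted task; tightening either factor, as the paper does by using the BCET-WCET range of higher-priority jobs and data-cache analysis, directly tightens the resulting response-time bound.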
