author | Tejun Heo <tj@kernel.org> | 2012-03-08 10:53:58 -0800
---|---|---
committer | Jens Axboe <axboe@kernel.dk> | 2012-03-20 12:45:37 +0100
commit | 997a026c80c3cc05f82e589aced1f0011c17d376 (patch) |
tree | 905fe49970f8549663e1e70e77dd04811fd14c9c /block/blk-cgroup.h |
parent | 5fe224d2d5fbf8f020b30d0ba69fed7856923752 (diff) |
blkcg: simplify stat reset
blkiocg_reset_stats() implements stat reset for the blkio.reset_stats
cgroupfs file. This feature is very unconventional and something
which shouldn't have been merged. It's only useful when a single
user or tool is looking at the stats. As soon as multiple users
and/or tools are involved, it becomes useless, as a reset by one
disrupts all other usages. There are very good reasons why all other
stats expect readers to read values at the start and end of a period
and subtract to determine the delta over the period.
The implementation is rather complex: some fields shouldn't be
cleared, so, for some reason, it saves those fields, resets the whole
structure, and then restores them. The reset of percpu stats is also
racy. The comment cites 64-bit store atomicity as the reason, but even
without that issue, the stores of zero can simply race with other CPUs
doing read-modify-write (RMW) updates and get clobbered.
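To see why the plain stores of zero lose even without the 64-bit
atomicity concern: a racing CPU can load the old value before the reset
and store its incremented copy after it, silently undoing the reset.
Below is a minimal userspace illustration of that interleaving
(pthreads standing in for CPUs; compile with -pthread). It is a
demonstration of the race, not kernel code:

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* volatile keeps each += as an explicit load/store pair (RMW) */
static volatile uint64_t counter;	/* stands in for a percpu stat */

static void *updater(void *arg)
{
	(void)arg;
	for (int i = 0; i < 10000000; i++)
		counter += 1;		/* non-atomic read-modify-write */
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, updater, NULL);

	/*
	 * "Reset" while updates are in flight.  A store of zero landing
	 * between the updater's load and store is clobbered, so resets
	 * are silently lost and the final value is unpredictable.
	 */
	for (int i = 0; i < 1000; i++)
		counter = 0;

	pthread_join(t, NULL);
	printf("counter after racy resets: %llu\n",
	       (unsigned long long)counter);
	return 0;
}
```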
Simplify reset by (a sketch of the resulting logic follows this list):
* Clearing fields selectively instead of saving, resetting, and restoring.
* Grouping the debug stat fields to be reset and using a single memset() over them.
* Not caring about stats_lock.
* Using memset() to reset the percpu stats.
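What the simplified reset boils down to, as a sketch only: the actual
function body lives in block/blk-cgroup.c, which this diffstat doesn't
show, so the helper and field names below (blkg->stats, blkg->stats_cpu,
for_each_possible_cpu(), per_cpu_ptr()) are assumptions based on the
kernel APIs of that era rather than a quote of the patch:

```c
/*
 * Sketch only -- the real blkiocg_reset_stats() is in block/blk-cgroup.c.
 * blkg->stats and the percpu blkg->stats_cpu pointer are assumed here.
 */
static void blkg_reset_stats_sketch(struct blkio_group *blkg)
{
	struct blkio_group_stats *stats = &blkg->stats;
	int cpu;

	/* clear selectively -- no save/reset/restore dance */
	stats->time = 0;
#ifdef CONFIG_DEBUG_BLK_CGROUP
	/* the grouped debug fields go away with one memset() */
	memset((void *)stats + BLKG_STATS_DEBUG_CLEAR_START, 0,
	       BLKG_STATS_DEBUG_CLEAR_SIZE);
#endif
	/* percpu stats: plain memset, tolerating the benign races above */
	for_each_possible_cpu(cpu)
		memset(per_cpu_ptr(blkg->stats_cpu, cpu), 0,
		       sizeof(struct blkio_group_stats_cpu));
}
```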
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-cgroup.h')
-rw-r--r-- | block/blk-cgroup.h | 14
1 file changed, 12 insertions(+), 2 deletions(-)
```diff
diff --git a/block/blk-cgroup.h b/block/blk-cgroup.h
index 6c8e3e345426..1fa3c5e8d87f 100644
--- a/block/blk-cgroup.h
+++ b/block/blk-cgroup.h
@@ -131,21 +131,31 @@ struct blkio_group_stats {
 	/* Total time spent waiting for it to be assigned a timeslice. */
 	uint64_t group_wait_time;
-	uint64_t start_group_wait_time;
 
 	/* Time spent idling for this blkio_group */
 	uint64_t idle_time;
-	uint64_t start_idle_time;
 	/*
 	 * Total time when we have requests queued and do not contain the
 	 * current active queue.
 	 */
 	uint64_t empty_time;
+
+	/* fields after this shouldn't be cleared on stat reset */
+	uint64_t start_group_wait_time;
+	uint64_t start_idle_time;
 	uint64_t start_empty_time;
 
 	uint16_t flags;
 #endif
 };
 
+#ifdef CONFIG_DEBUG_BLK_CGROUP
+#define BLKG_STATS_DEBUG_CLEAR_START	\
+	offsetof(struct blkio_group_stats, unaccounted_time)
+#define BLKG_STATS_DEBUG_CLEAR_SIZE	\
+	(offsetof(struct blkio_group_stats, start_group_wait_time) - \
+	 BLKG_STATS_DEBUG_CLEAR_START)
+#endif
+
 /* Per cpu blkio group stats */
 struct blkio_group_stats_cpu {
 	uint64_t sectors;
```
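The two macros added at the bottom of the hunk use a general C layout
trick: the fields to be wiped are kept contiguous (which is why the
patch moves the start_* fields below the "shouldn't be cleared"
comment), and offsetof() delimits the byte range so one memset() covers
them all. A self-contained userspace illustration with made-up field
names:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Made-up struct mirroring the blkio_group_stats layout trick:
 * resettable fields sit together, preserved fields come after. */
struct stats {
	uint64_t keep_before;	/* survives reset */
	uint64_t clear_first;	/* start of the reset range */
	uint64_t clear_second;
	uint64_t keep_after;	/* "fields after this shouldn't be cleared" */
};

#define STATS_CLEAR_START	offsetof(struct stats, clear_first)
#define STATS_CLEAR_SIZE	\
	(offsetof(struct stats, keep_after) - STATS_CLEAR_START)

int main(void)
{
	struct stats s = { 1, 2, 3, 4 };

	/* one memset() zeroes exactly [clear_first, keep_after) */
	memset((char *)&s + STATS_CLEAR_START, 0, STATS_CLEAR_SIZE);

	printf("%llu %llu %llu %llu\n",	/* prints: 1 0 0 4 */
	       (unsigned long long)s.keep_before,
	       (unsigned long long)s.clear_first,
	       (unsigned long long)s.clear_second,
	       (unsigned long long)s.keep_after);
	return 0;
}
```

One caveat with this pattern: the memset() also covers any padding
inside the range, which is harmless here since all members are
uint64_t. The kernel version does the pointer arithmetic on (void *),
a GCC extension, where the example above casts to (char *) for
portability.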