author		Ying Xue <ying.xue@windriver.com>	2014-05-05 08:56:09 +0800
committer	David S. Miller <davem@davemloft.net>	2014-05-05 17:26:43 -0400
commit		5356f3d7d48af72eb2a14b643d5563f068c44fe0 (patch)
tree		e0ad9803f7fbf71b467ea9bb3e24f64dc673939a /net/tipc/name_distr.c
parent		5b579e212fc77b6731e2767a0658ae7b64a67a10 (diff)
tipc: always use tipc_node_lock() to hold node lock
Although we usually take the node lock with tipc_node_lock(), there are
still places where we grab it directly through the native spinlock
interface. Since future changes will do more work when the node lock is
released, we should ensure that tipc_node_lock() is always used to take
it.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/tipc/name_distr.c')
-rw-r--r--	net/tipc/name_distr.c	6
1 file changed, 3 insertions, 3 deletions
diff --git a/net/tipc/name_distr.c b/net/tipc/name_distr.c
index 974a73f3d876..8465263246c3 100644
--- a/net/tipc/name_distr.c
+++ b/net/tipc/name_distr.c
@@ -135,18 +135,18 @@ void named_cluster_distribute(struct sk_buff *buf)
 	rcu_read_lock();
 	list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
-		spin_lock_bh(&n_ptr->lock);
+		tipc_node_lock(n_ptr);
 		l_ptr = n_ptr->active_links[n_ptr->addr & 1];
 		if (l_ptr) {
 			buf_copy = skb_copy(buf, GFP_ATOMIC);
 			if (!buf_copy) {
-				spin_unlock_bh(&n_ptr->lock);
+				tipc_node_unlock(n_ptr);
 				break;
 			}
 			msg_set_destnode(buf_msg(buf_copy), n_ptr->addr);
 			__tipc_link_xmit(l_ptr, buf_copy);
 		}
-		spin_unlock_bh(&n_ptr->lock);
+		tipc_node_unlock(n_ptr);
 	}
 	rcu_read_unlock();
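The benefit of funnelling every acquisition through the wrappers is that
later patches only need to change one place to run extra work on the
unlock path, instead of touching every call site. Below is a minimal
sketch of what such wrappers amount to; the struct body is illustrative
and not the real definition from net/tipc/node.h.

#include <linux/spinlock.h>

/* Illustrative stand-in; the real struct tipc_node lives in net/tipc/node.h */
struct tipc_node {
	spinlock_t lock;
	/* ... address, link pointers and other per-node state ... */
};

static inline void tipc_node_lock(struct tipc_node *n_ptr)
{
	spin_lock_bh(&n_ptr->lock);
}

static inline void tipc_node_unlock(struct tipc_node *n_ptr)
{
	spin_unlock_bh(&n_ptr->lock);
	/* a single place where later patches can trigger deferred node
	 * work once the lock has been dropped */
}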