[firefly] [PATCH] kernel:sched: renamed max_vruntime parameter

Daniel Baluta daniel.baluta at gmail.com
Tue Mar 12 17:13:45 EET 2013


On Tue, Mar 12, 2013 at 5:05 PM, Andrei Epure <epure.andrei at gmail.com> wrote:
> The min_vruntime variable actually stores the maximum value.
> I added the comment for code readability.
>
> Signed-off-by: Andrei Epure <epure.andrei at gmail.com>
> ---
>  kernel/sched/fair.c |   13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 3220639..a065d0f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -431,13 +431,13 @@ void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, unsigned long delta_exec);
>   * Scheduling class tree data structure manipulation methods:
>   */
>
> -static inline u64 max_vruntime(u64 min_vruntime, u64 vruntime)
> +static inline u64 max_vruntime(u64 max_vruntime, u64 vruntime)
>  {
> -       s64 delta = (s64)(vruntime - min_vruntime);
> +       s64 delta = (s64)(vruntime - max_vruntime);
>         if (delta > 0)
> -               min_vruntime = vruntime;
> +               max_vruntime = vruntime;
>
> -       return min_vruntime;
> +       return max_vruntime;
>  }
Please use git blame -c kernel/sched/fair.c to check the commit that
introduced this naming; it may well be intended.
>
>  static inline u64 min_vruntime(u64 min_vruntime, u64 vruntime)
> @@ -473,6 +473,7 @@ static void update_min_vruntime(struct cfs_rq *cfs_rq)
>                         vruntime = min_vruntime(vruntime, se->vruntime);
>         }
>
> +       /* ensure we never gain time by being placed backwards. */
>         cfs_rq->min_vruntime = max_vruntime(cfs_rq->min_vruntime, vruntime);
>  #ifndef CONFIG_64BIT
>         smp_wmb();
> @@ -3576,7 +3577,7 @@ preempt:
>          * point, either of which can * drop the rq lock.
>          *
>          * Also, during early boot the idle thread is in the fair class,
> -        * for obvious reasons its a bad idea to schedule back to it.
> +        * for obvious reasons it's a bad idea to schedule back to it.
>          */
>         if (unlikely(!se->on_rq || curr == rq->idle))
>                 return;
> @@ -3785,7 +3786,7 @@ static bool yield_to_task_fair(struct rq *rq, struct task_struct *p, bool preemp
>   *
>   * w_i,j,k is the weight of the j-th runnable task in the k-th cgroup on cpu i.
>   *
> - * The big problem is S_k, its a global sum needed to compute a local (W_i)
> + * The big problem is S_k, it's a global sum needed to compute a local (W_i)
>   * property.
>   *
>   * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> --

The two spelling fixes should be the subject of a separate patch.

thanks,
Daniel.
