We actually need one byte fewer: next_mb_id is exclusive and first_mb_id
is inclusive, so next_mb_id - first_mb_id bytes already cover all tracked
memory blocks. While at it, compact the code.
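
To illustrate why the extra byte matters at a page boundary, here is a
minimal standalone sketch. The PAGE_SIZE/PFN_UP definitions below merely
mirror the kernel macros, and the id values are made-up assumptions for
this example, not taken from the driver:

  #include <stdio.h>

  /* Assumed stand-ins for the kernel's PAGE_SIZE and PFN_UP(). */
  #define PAGE_SIZE 4096UL
  #define PFN_UP(x) (((x) + PAGE_SIZE - 1) / PAGE_SIZE)

  int main(void)
  {
          unsigned long first_mb_id = 100;  /* inclusive, hypothetical */
          unsigned long next_mb_id = 4196;  /* exclusive, hypothetical */

          /*
           * One state byte per tracked block, ids first_mb_id up to
           * next_mb_id - 1: exactly next_mb_id - first_mb_id bytes.
           */
          unsigned long bytes = next_mb_id - first_mb_id;  /* 4096 */
          unsigned long old_bytes = bytes + 1;             /* 4097 */

          printf("correct: %lu pages, old code: %lu pages\n",
                 PFN_UP(bytes), PFN_UP(old_bytes));        /* 1 vs. 2 */
          return 0;
  }

When the block count is an exact page multiple, the stray "+ 1" costs a
whole extra page; everywhere else both counts land in the same page and
the off-by-one is invisible.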
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/r/20201112133815.13332-3-david@redhat.com
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
  */
 static int virtio_mem_mb_state_prepare_next_mb(struct virtio_mem *vm)
 {
-	unsigned long old_bytes = vm->next_mb_id - vm->first_mb_id + 1;
-	unsigned long new_bytes = vm->next_mb_id - vm->first_mb_id + 2;
-	int old_pages = PFN_UP(old_bytes);
-	int new_pages = PFN_UP(new_bytes);
+	int old_pages = PFN_UP(vm->next_mb_id - vm->first_mb_id);
+	int new_pages = PFN_UP(vm->next_mb_id - vm->first_mb_id + 1);
 	uint8_t *new_mb_state;
 
 	if (vm->mb_state && old_pages == new_pages)