Java 1.7 HashMap Source Code Analysis


Class diagram

HashMap is implemented as an array plus linked lists: each (key, value) pair is wrapped in an Entry object, a static inner class of HashMap.

static class Entry<K,V> implements Map.Entry<K,V> {
    final K key;
    V value;
    Entry<K,V> next;
    int hash;

    /**
     * Creates new entry.
     */
    Entry(int h, K k, V v, Entry<K,V> n) {
        value = v;
        next = n;
        key = k;
        hash = h;
    }
    // ...
}

        An entry's position in the array is computed from the hash of its key. Because hash collisions can occur (different keys may produce the same hash), each array slot is a hash bucket, i.e. potentially the head of a singly linked list, and collisions are resolved by chaining. On a collision, the list in that bucket is walked: if an entry with an equal key is found, its value is overwritten; if the end of the list is reached without a match, the new entry is inserted at the head of the list, and the array slot is then updated to point at this new entry. Why head insertion? By the principle of temporal locality, the most recently inserted entry is the most likely to be accessed next, so it is placed where lookups find it first.
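The bucket walk and head insertion described above can be sketched as a tiny standalone class (Bucket and Node are illustrative names for this sketch, not JDK types):

```java
// Minimal sketch of JDK 7-style head insertion into one collision bucket.
class Bucket {
    static class Node {
        final String key;
        Object value;
        Node next;
        Node(String key, Object value, Node next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    Node head; // plays the role of one slot in the table array

    /** Overwrite the value if the key exists, otherwise insert at the head. */
    Object put(String key, Object value) {
        for (Node e = head; e != null; e = e.next) {
            if (e.key.equals(key)) {          // equal key: overwrite
                Object old = e.value;
                e.value = value;
                return old;
            }
        }
        head = new Node(key, value, head);    // new key: head insertion
        return null;
    }
}
```

After two inserts with distinct keys, the head of the list is the most recently inserted one.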

The default initial capacity is 16 (1 << 4, i.e. 1 shifted left by 4 bits); the maximum capacity is 2^30.

/**
 * The default initial capacity - MUST be a power of two.
 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
/**
 * The maximum capacity, used if a higher value is implicitly specified
 * by either of the constructors with arguments.
 * MUST be a power of two <= 1<<30.
 */
static final int MAXIMUM_CAPACITY = 1 << 30;

        The default load factor is 0.75: once the number of elements reaches length * 0.75, a resize is triggered and the array doubles in size. The array length must always be a power of two.

/**
 * The load factor used when none specified in constructor.
 */
static final float DEFAULT_LOAD_FACTOR = 0.75f;

        The resize threshold; by default, the array capacity multiplied by the load factor.

/**
 * The next size value at which to resize (capacity * load factor).
 * @serial
 */
// If table == EMPTY_TABLE then this is the initial capacity at which the
// table will be created when inflated.
int threshold;
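A quick check of the arithmetic: with the default capacity of 16 and load factor of 0.75, the threshold is 12, so it is the insertion of the 13th distinct key that can trigger a resize. A minimal sketch (threshold here is a hypothetical helper mirroring the computation, not the JDK field):

```java
public class ThresholdDemo {
    // Mirrors the capacity * loadFactor computation done in inflateTable/resize.
    static int threshold(int capacity, float loadFactor) {
        return (int) (capacity * loadFactor);
    }

    public static void main(String[] args) {
        System.out.println(threshold(16, 0.75f)); // 12
        System.out.println(threshold(32, 0.75f)); // 24
    }
}
```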

        The main constructor:

public HashMap(int initialCapacity, float loadFactor) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " +
                                           initialCapacity);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " +
                                           loadFactor);

    this.loadFactor = loadFactor;
    threshold = initialCapacity;
    init();
}

        The Entry array named table is where the data lives. When a HashMap is created, table is initialized to a shared empty Entry array.

/**
 * An empty table instance to share when the table is not inflated.
 */
static final Entry<?,?>[] EMPTY_TABLE = {};

/**
 * The table, resized as necessary. Length MUST Always be a power of two.
 */
transient Entry<K,V>[] table = (Entry<K,V>[]) EMPTY_TABLE;

Key methods. The put method:

public V put(K key, V value) {
    // If the table is still the shared empty array,
    if (table == EMPTY_TABLE) {
        // inflate it: allocate the real array, sized from threshold.
        inflateTable(threshold);
    }
    // A null key is handled separately and always goes to bucket 0.
    if (key == null)
        return putForNullKey(value);
    int hash = hash(key);
    int i = indexFor(hash, table.length);
    // Walk the bucket's list; an equal key means overwrite and return the old value.
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }

    modCount++;
    addEntry(hash, key, value, i);
    return null;
}

/**
 * Inflates the table.
 */
private void inflateTable(int toSize) {
    // Find a power of 2 >= toSize
    int capacity = roundUpToPowerOf2(toSize);

    threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);
    table = new Entry[capacity];
    initHashSeedAsNeeded(capacity);
}

        Integer.highestOneBit is used to compute the initial array capacity. Why is number decremented before the shift? If number were simply shifted left one bit (doubled) without the -1, then any number that is already a power of two (16, 32, ...) would come back doubled: for number = 16, Integer.highestOneBit(16 << 1) returns 32, while the expected result is 16. With the -1, Integer.highestOneBit((16 - 1) << 1) = Integer.highestOneBit(30) = 16, as intended.

private static int roundUpToPowerOf2(int number) {
    // assert number >= 0 : "number must be non-negative";
    return number >= MAXIMUM_CAPACITY
            ? MAXIMUM_CAPACITY
            : (number > 1) ? Integer.highestOneBit((number - 1) << 1) : 1;
}
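This can be verified by running the same logic standalone (RoundUpDemo is an illustrative wrapper around the method shown above):

```java
public class RoundUpDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Same logic as the JDK 7 roundUpToPowerOf2 shown above.
    static int roundUpToPowerOf2(int number) {
        return number >= MAXIMUM_CAPACITY
                ? MAXIMUM_CAPACITY
                : (number > 1) ? Integer.highestOneBit((number - 1) << 1) : 1;
    }

    public static void main(String[] args) {
        // Thanks to the "- 1", an exact power of two is returned unchanged.
        System.out.println(roundUpToPowerOf2(16)); // 16
        System.out.println(roundUpToPowerOf2(17)); // 32
        System.out.println(roundUpToPowerOf2(1));  // 1
    }
}
```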

        The core of the capacity computation is Integer.highestOneBit, which returns the largest power of two less than or equal to its argument. A power of two has a distinctive binary form: exactly one bit is set, the highest one.

1  ----- 0000 0001
2  ----- 0000 0010
4  ----- 0000 0100
8  ----- 0000 1000
16 ----- 0001 0000
32 ----- 0010 0000
/**
 * Returns an {@code int} value with at most a single one-bit, in the
 * position of the highest-order ("leftmost") one-bit in the specified
 * {@code int} value.  Returns zero if the specified value has no
 * one-bits in its two's complement binary representation, that is, if it
 * is equal to zero.
 *
 * @param i the value whose highest one bit is to be computed
 * @return an {@code int} value with a single one-bit, in the position
 *     of the highest-order one-bit in the specified value, or zero if
 *     the specified value is itself equal to zero.
 * @since 1.5
 */
public static int highestOneBit(int i) {
    // HD, Figure 3-1
    i |= (i >>  1);
    i |= (i >>  2);
    i |= (i >>  4);
    i |= (i >>  8);
    i |= (i >> 16);
    return i - (i >>> 1);
}

        An example: let int i = 17 and find the largest power of two not exceeding it. The cascade of shift-and-OR steps turns i into a value in which every bit from i's highest set bit downward is 1. Shifting that value right by one clears the highest bit while keeping the rest set, so i minus its own right-shift leaves exactly the highest bit set and all other bits zero, which is precisely the power-of-two pattern. Why shift by 1, 2, 4, 8, and 16? Because 1 + 2 + 4 + 8 + 16 = 31 covers all 31 value bits of a positive int, guaranteeing the ones propagate all the way down no matter where the highest bit sits.

17   ----- 0001 0001
>>1  ----- 0000 1000
|    ----- 0001 1001
>>2  ----- 0000 0110
|    ----- 0001 1111
>>4  ----- 0000 0001
|    ----- 0001 1111
>>8  ----- 0000 0000
|    ----- 0001 1111
...
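The trace above can be confirmed directly against the real JDK method:

```java
public class HighestOneBitDemo {
    public static void main(String[] args) {
        // After the shift-or cascade, 17 (0001 0001) becomes 31 (0001 1111);
        // 31 - (31 >>> 1) = 31 - 15 = 16, the largest power of two <= 17.
        System.out.println(Integer.highestOneBit(17)); // 16
        System.out.println(Integer.highestOneBit(32)); // 32
        System.out.println(Integer.highestOneBit(0));  // 0
    }
}
```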

        Why must the array capacity be a power of two after all this work? Because an element's index is computed as hash & (length - 1). When length is a power of two, length - 1 has all of its low bits set to 1 and all higher bits 0. Since AND yields 1 only where both operands have a 1, hash & (length - 1) is guaranteed to fall in [0, length - 1] and can never index out of the array's bounds.

/**
 * Returns index for hash code h.
 */
static int indexFor(int h, int length) {
    // assert Integer.bitCount(length) == 1 : "length must be a non-zero power of 2";
    return h & (length-1);
}
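For a power-of-two length, h & (length - 1) is equivalent to the non-negative remainder of division by length, which is why the power-of-two constraint buys both speed and safety. A quick check (Math.floorMod, available since Java 8, is used here only for comparison):

```java
public class IndexForDemo {
    // Same masking as the JDK 7 indexFor shown above.
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int length = 16; // power of two, so length - 1 = 0b1111
        int[] hashes = {17, -5, 123456789};
        for (int h : hashes) {
            // Math.floorMod gives the mathematically non-negative remainder.
            System.out.println(indexFor(h, length) == Math.floorMod(h, length));
        }
    }
}
```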

        The hash method applies all those XORs to spread the hash value more evenly, and all those shifts to fold the high-order bits of the key's hash down so that they too participate in the AND that computes the array index.

/**
 * Retrieve object hash code and applies a supplemental hash function to the
 * result hash, which defends against poor quality hash functions.  This is
 * critical because HashMap uses power-of-two length hash tables, that
 * otherwise encounter collisions for hashCodes that do not differ
 * in lower bits. Note: Null keys always map to hash 0, thus index 0.
 */
final int hash(Object k) {
    int h = hashSeed;
    if (0 != h && k instanceof String) {
        return sun.misc.Hashing.stringHash32((String) k);
    }

    h ^= k.hashCode();

    // This function ensures that hashCodes that differ only by
    // constant multiples at each bit position have a bounded
    // number of collisions (approximately 8 at default load factor).
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}
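The effect of the bit spreading can be seen with two hash codes that differ only in their high bits; this sketch assumes hashSeed is 0, its usual value:

```java
public class HashSpreadDemo {
    // The supplemental hash from JDK 7, with hashSeed assumed to be 0.
    static int spread(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    public static void main(String[] args) {
        int length = 16;
        int a = 0x10000, b = 0x20000; // differ only above bit 15
        // Without spreading, the mask throws the high bits away:
        // both land in bucket 0.
        System.out.println((a & (length - 1)) + " " + (b & (length - 1)));
        // With spreading, the high bits leak into the low bits
        // and the two keys land in different buckets.
        System.out.println((spread(a) & (length - 1)) + " "
                + (spread(b) & (length - 1)));
    }
}
```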

        On resizing: when put is called with a key not already present, a new entry is added to the map, and that addition may trigger a resize.

/**
 * Adds a new entry with the specified key, value and hash code to
 * the specified bucket.  It is the responsibility of this
 * method to resize the table if appropriate.
 *
 * Subclass overrides this to alter the behavior of put method.
 */
void addEntry(int hash, K key, V value, int bucketIndex) {
    /*
     * Resize when the current element count has reached the threshold
     * AND the bucket at this index is non-empty.
     * (The non-empty-bucket condition no longer exists in 1.8.)
     */
    if ((size >= threshold) && (null != table[bucketIndex])) {
        // Grow the array to twice its previous length.
        resize(2 * table.length);
        hash = (null != key) ? hash(key) : 0;
        // After the resize, recompute the bucket index for the new length.
        bucketIndex = indexFor(hash, table.length);
    }

    createEntry(hash, key, value, bucketIndex);
}

        The resize method. Even though keys' hash values are not recomputed, after a resize each element's new index is either its old index or its old index plus the old array length (the hash is unchanged; the new mask used in the AND simply has one more high bit than the old one).

/**
 * Rehashes the contents of this map into a new array with a
 * larger capacity.  This method is called automatically when the
 * number of keys in this map reaches its threshold.
 *
 * If current capacity is MAXIMUM_CAPACITY, this method does not
 * resize the map, but sets threshold to Integer.MAX_VALUE.
 * This has the effect of preventing future calls.
 *
 * @param newCapacity the new capacity, MUST be a power of two;
 *        must be greater than current capacity unless current
 *        capacity is MAXIMUM_CAPACITY (in which case value
 *        is irrelevant).
 */
void resize(int newCapacity) {
    Entry[] oldTable = table;
    int oldCapacity = oldTable.length;
    if (oldCapacity == MAXIMUM_CAPACITY) {
        threshold = Integer.MAX_VALUE;
        return;
    }

    Entry[] newTable = new Entry[newCapacity];
    transfer(newTable, initHashSeedAsNeeded(newCapacity));
    table = newTable;
    threshold = (int)Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);
}

transfer moves the entries from the old table into the new one. This method is where the well-known thread-safety problem lives: concurrent resizes can corrupt a bucket's linked list into a cycle, causing an infinite loop.

/**
 * Transfers all entries from current table to newTable.
 */
void transfer(Entry[] newTable, boolean rehash) {
    int newCapacity = newTable.length;
    for (Entry<K,V> e : table) {
        while(null != e) {
            Entry<K,V> next = e.next;
            if (rehash) {
                e.hash = null == e.key ? 0 : hash(e.key);
            }
            int i = indexFor(e.hash, newCapacity);
            e.next = newTable[i];
            newTable[i] = e;
            e = next;
        }
    }
}
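The "old index, or old index plus the old capacity" property mentioned earlier follows directly from the masking: the new mask has exactly one extra high bit, so the new index depends only on that one extra bit of the hash. A quick check over random hashes:

```java
import java.util.Random;

public class ResizeIndexDemo {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        Random rnd = new Random(42);
        boolean allHold = true;
        for (int n = 0; n < 1000; n++) {
            int h = rnd.nextInt();
            int oldIdx = h & (oldCap - 1);
            int newIdx = h & (newCap - 1);
            // The new index is the old one, or the old one plus oldCap,
            // depending on the single extra masked bit (h & oldCap).
            allHold &= (newIdx == oldIdx) || (newIdx == oldIdx + oldCap);
        }
        System.out.println(allHold); // true
    }
}
```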

        On modCount: java.util.HashMap is not thread-safe, so if another thread modifies the map while an iterator is in use, a ConcurrentModificationException is thrown. This is the so-called fail-fast strategy. In the source it is implemented via the modCount field: modCount is, as the name suggests, a modification count, and every structural modification of the HashMap increments it. When an iterator is created, the current value is copied into the iterator's expectedModCount. During iteration, modCount is compared against expectedModCount; a mismatch means the map has been modified elsewhere. In JDK 5 and JDK 6, modCount was indeed declared volatile, but in JDK 7 and JDK 8 that declaration is gone.
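The fail-fast behavior is easy to reproduce even in a single thread: modify the map structurally through the map itself (not through the iterator) while iterating over it:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String key : map.keySet()) {
                map.remove(key); // structural change bumps modCount
            }
        } catch (ConcurrentModificationException e) {
            // modCount != expectedModCount detected on the next iteration step
            System.out.println("fail-fast triggered");
        }
    }
}
```

Using iterator.remove() instead would keep expectedModCount in sync and not throw.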

Original source: http://outofmemory.cn/langs/740598.html