Poison



Alter the Program’s Execution Flow

Posted on 2021-12-24

When debugging a program, we can improve our efficiency with operations such as dropping the current stack frame, evaluating breakpoint expressions, forcing an early return from the current method, and throwing exceptions.

References

Alter the program’s execution flow | IntelliJ IDEA

Dubbo #9490

Posted on 2021-12-23

Today I helped a colleague investigate a problem. It was not complicated, it just presented no obvious leads, so I am recording it briefly here. The first symptom was a NullPointerException: when the caller invoked a method exposed by the provider over Dubbo, all the caller saw was a NullPointerException with no other useful information. The caller's code, simplified, looks like this:

@Reference
private TestService testService;

@Test
public void testNPE() {
    testService.queryByQuery(new TestQuery(22L));
}

The testService.queryByQuery(...) call throws the NullPointerException, with the following stack trace:

java.lang.NullPointerException
at me.tianshuang.TestServiceImpl.queryByQuery(TestServiceImpl.java:64)
at org.apache.dubbo.common.bytecode.Wrapper187.invokeMethod(Wrapper187.java)
at org.apache.dubbo.rpc.proxy.javassist.JavassistProxyFactory$1.doInvoke(JavassistProxyFactory.java:47)
at org.apache.dubbo.rpc.proxy.AbstractProxyInvoker.invoke(AbstractProxyInvoker.java:84)
at org.apache.dubbo.config.invoker.DelegateProviderMetaDataInvoker.invoke(DelegateProviderMetaDataInvoker.java:56)
at org.apache.dubbo.rpc.protocol.InvokerWrapper.invoke(InvokerWrapper.java:56)
at me.tianshuang.dubbo.filter.DubboExceptionFilter.invoke(DubboExceptionFilter.java:22)
at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:81)
at me.tianshuang.dubbo.filter.DubboMethodFilter.invoke(DubboMethodFilter.java:38)
at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:81)
at me.tianshuang.dubbo.filter.DubboProviderFilter.invoke(DubboProviderFilter.java:56)
at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:81)
at com.alibaba.csp.sentinel.adapter.dubbo.SentinelDubboProviderFilter.invoke(SentinelDubboProviderFilter.java:77)
at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:81)
at org.apache.dubbo.monitor.support.MonitorFilter.invoke(MonitorFilter.java:89)
at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:81)
at org.apache.dubbo.rpc.filter.TimeoutFilter.invoke(TimeoutFilter.java:44)
at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:81)
at org.apache.dubbo.rpc.protocol.dubbo.filter.TraceFilter.invoke(TraceFilter.java:81)
at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:81)
at org.apache.dubbo.rpc.filter.ContextFilter.invoke(ContextFilter.java:102)
at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:81)
at org.apache.dubbo.rpc.filter.GenericFilter.invoke(GenericFilter.java:149)
at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:81)
at org.apache.dubbo.rpc.filter.ClassLoaderFilter.invoke(ClassLoaderFilter.java:38)
at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:81)
at org.apache.dubbo.rpc.filter.EchoFilter.invoke(EchoFilter.java:41)
at org.apache.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:81)
at org.apache.dubbo.rpc.protocol.dubbo.DubboProtocol$1.reply(DubboProtocol.java:150)
at org.apache.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.handleRequest(HeaderExchangeHandler.java:100)
at org.apache.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.received(HeaderExchangeHandler.java:175)
at org.apache.dubbo.remoting.transport.DecodeHandler.received(DecodeHandler.java:51)
at org.apache.dubbo.remoting.transport.dispatcher.ChannelEventRunnable.run(ChannelEventRunnable.java:57)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

ConcurrentLruCache

Posted on 2021-12-21

Today I came across the ConcurrentLruCache implementation in Spring; this post is a brief note on it.

/**
 * Simple LRU (Least Recently Used) cache, bounded by a specified cache limit.
 *
 * <p>This implementation is backed by a {@code ConcurrentHashMap} for storing
 * the cached values and a {@code ConcurrentLinkedDeque} for ordering the keys
 * and choosing the least recently used key when the cache is at full capacity.
 *
 * @author Brian Clozel
 * @author Juergen Hoeller
 * @since 5.3
 * @param <K> the type of the key used for cache retrieval
 * @param <V> the type of the cached values
 * @see #get
 */
public class ConcurrentLruCache<K, V> {

    private final int sizeLimit;

    private final Function<K, V> generator;

    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();

    // in this implementation, the tail of the queue holds the most recently accessed key
    private final ConcurrentLinkedDeque<K> queue = new ConcurrentLinkedDeque<>();

    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    // not every read of size happens under the lock, so volatile ensures visibility
    private volatile int size;


    /**
     * Create a new cache instance with the given limit and generator function.
     * @param sizeLimit the maximum number of entries in the cache
     * (0 indicates no caching, always generating a new value)
     * @param generator a function to generate a new value for a given key
     */
    public ConcurrentLruCache(int sizeLimit, Function<K, V> generator) {
        Assert.isTrue(sizeLimit >= 0, "Cache size limit must not be negative");
        Assert.notNull(generator, "Generator function must not be null");
        this.sizeLimit = sizeLimit;
        this.generator = generator;
    }


    /**
     * Retrieve an entry from the cache, potentially triggering generation
     * of the value.
     * @param key the key to retrieve the entry for
     * @return the cached or newly generated value
     */
    public V get(K key) {
        // a sizeLimit of 0 disables caching: invoke the generator's apply
        // function on every call and return the freshly generated value
        if (this.sizeLimit == 0) {
            return this.generator.apply(key);
        }

        V cached = this.cache.get(key);
        if (cached != null) { // a cached value exists for this key
            // while fewer than sizeLimit entries are cached, no queue node
            // needs to be moved, so the value can be returned directly
            if (this.size < this.sizeLimit) {
                return cached;
            }
            // reaching this point means the cache holds at least sizeLimit
            // entries, so the queue node for this key must be moved
            this.lock.readLock().lock();
            try {
                // search backwards from the tail for the accessed key and remove it
                if (this.queue.removeLastOccurrence(key)) {
                    // re-append the accessed key at the tail; this happens under the
                    // *read* lock, i.e. concurrently, which is why queue must be a
                    // thread-safe implementation: ConcurrentLinkedDeque
                    this.queue.offer(key);
                }
                return cached;
            }
            finally {
                this.lock.readLock().unlock();
            }
        }

        // reaching this point means no cached value exists for this key
        this.lock.writeLock().lock();
        try {
            // double-check inside the write-lock critical section: concurrent get
            // calls may enter it serially and would otherwise create the entry twice
            // Retrying in case of concurrent reads on the same key
            cached = this.cache.get(key);
            if (cached != null) {
                if (this.queue.removeLastOccurrence(key)) {
                    this.queue.offer(key);
                }
                return cached;
            }
            // about to invoke the generator's apply function to produce the value
            // Generate value first, to prevent size inconsistency
            V value = this.generator.apply(key);
            // if the cache has reached the configured limit, evict the queue head
            if (this.size == this.sizeLimit) {
                K leastUsed = this.queue.poll();
                if (leastUsed != null) {
                    this.cache.remove(leastUsed);
                }
            }
            this.queue.offer(key); // append at the tail: the most recent key lives at the tail
            this.cache.put(key, value);
            this.size = this.cache.size();
            return value;
        }
        finally {
            this.lock.writeLock().unlock();
        }
    }

}

The core method is get; its logic is explained in the comments added above. The key point is that the read-locked section is allowed to run on multiple threads concurrently while the order of the nodes in queue must still be maintained, which is why the thread-safe deque ConcurrentLinkedDeque is used. The one thing that puzzled me: every mutation of cache is guarded by the write lock, and cache is a ConcurrentHashMap in the current implementation, so could it be replaced with a plain HashMap? It cannot, because the cache.get(key) at the top of get runs outside any lock, so reads would race with the write-locked mutations.

Regarding this implementation, a user filed an issue reporting that the O(n) scan of the deque is inefficient and suggesting a ConcurrentLinkedHashMap-based replacement; see Revisit ConcurrentLruCache implementation · Issue #26320.
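For comparison, the same access-order eviction semantics can be sketched single-threaded with java.util.LinkedHashMap in access-order mode — a minimal illustration, not Spring's implementation (the class name LruCacheSketch is made up here):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A single-threaded LRU sketch: accessOrder = true makes get() move an
// entry to the tail, mirroring ConcurrentLruCache moving the key to the
// tail of its deque; removeEldestEntry evicts the head (least recent).
public class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {

    private final int sizeLimit;

    public LruCacheSketch(int sizeLimit) {
        super(16, 0.75f, true); // third argument: access order, not insertion order
        this.sizeLimit = sizeLimit;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the least recently used entry once the limit is exceeded
        return size() > sizeLimit;
    }
}
```

With a limit of 2, putting a, b, touching a, then putting c evicts b rather than a — the same behavior the write-locked branch above produces.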

References

spring-framework/ConcurrentLruCache.java at v5.3.14 · spring-projects/spring-framework · GitHub
Why does ConcurrentLruCache in Spring still use thread-safe maps and queues instead of ordinary maps and queues even when read-write lock are used? - Stack Overflow

ConsistentHashLoadBalance

Posted on 2021-12-20

A previous project of mine used consistent hashing to route requests to specific cache nodes. For Dubbo's consistent-hash implementation, see Dubbo 一致性Hash负载均衡实现剖析, with source at ConsistentHashLoadBalance.java at dubbo-3.0.0. It uses a TreeMap as the underlying data structure for the mapping from hash values to Invokers; the code that selects the invoker for the current invocation is as follows:

private final TreeMap<Long, Invoker<T>> virtualInvokers;

public Invoker<T> select(Invocation invocation) {
    String key = toKey(invocation.getArguments());
    byte[] digest = Bytes.getMD5(key);
    return selectForKey(hash(digest, 0));
}

private String toKey(Object[] args) {
    StringBuilder buf = new StringBuilder();
    for (int i : argumentIndex) {
        if (i >= 0 && i < args.length) {
            buf.append(args[i]);
        }
    }
    return buf.toString();
}

private Invoker<T> selectForKey(long hash) {
    // first virtual node clockwise from the hash; wrap to the first
    // entry when the hash lies past the last node on the ring
    Map.Entry<Long, Invoker<T>> entry = virtualInvokers.ceilingEntry(hash);
    if (entry == null) {
        entry = virtualInvokers.firstEntry();
    }
    return entry.getValue();
}

Collections.SynchronizedList

Posted on 2021-12-15

When multiple threads access a List instance concurrently, the instance can be wrapped with Collections.synchronizedList(List<T>). The constructor it invokes internally lives in Collections.java at jdk8-b120:

static class SynchronizedCollection<E> implements Collection<E>, Serializable {

    final Collection<E> c;  // Backing Collection
    final Object mutex;     // Object on which to synchronize

    SynchronizedCollection(Collection<E> c) {
        this.c = Objects.requireNonNull(c);
        mutex = this;
    }

    SynchronizedCollection(Collection<E> c, Object mutex) {
        this.c = Objects.requireNonNull(c);
        this.mutex = Objects.requireNonNull(mutex);
    }

    public Iterator<E> iterator() {
        return c.iterator(); // Must be manually synched by user!
    }

    public boolean add(E e) {
        synchronized (mutex) {return c.add(e);}
    }

    // ...
}

As we can see, an internal mutex object serves as the lock that guarantees thread safety. Why not simply declare the methods synchronized to achieve the same semantics? Because the second constructor lets the caller supply the lock object, for example when a single lock should synchronize several collections, enabling consistent multi-threaded access across all of them.
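Since the mutex-taking constructor is package-private, application code that needs one lock across several collections typically writes the synchronized blocks itself — a minimal sketch (the PairedLists class is made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// One shared mutex guards two lists, so the cross-list invariant
// (both lists always have the same length) holds even when several
// threads call add() and size() concurrently.
public class PairedLists {

    private final Object mutex = new Object();
    private final List<String> keys = new ArrayList<>();
    private final List<Integer> values = new ArrayList<>();

    public void add(String key, int value) {
        synchronized (mutex) { // both mutations form one critical section
            keys.add(key);
            values.add(value);
        }
    }

    public int size() {
        synchronized (mutex) { // never observes one list mid-update
            return keys.size();
        }
    }
}
```

Wrapping each list individually with Collections.synchronizedList would make each call atomic, but not the pair of calls, so the invariant could still be violated between them.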

The Javadoc of Collections.synchronizedList(List<T>) also points out that iterating over the returned list requires external locking:

It is imperative that the user manually synchronize on the returned list when iterating over it:

List list = Collections.synchronizedList(new ArrayList());
    ...
synchronized (list) {
    Iterator i = list.iterator(); // Must be in synchronized block
    while (i.hasNext())
        foo(i.next());
}

Failure to follow this advice may result in non-deterministic behavior.

References

Why does SynchronizedCollection assign this to a mutex? - Stack Overflow
