Reinforcement-learning-based control of confined cylinder wakes with stability analyses

15 Nov 2021  ·  Jichao Li, Mengqi Zhang

This work studies the application of a reinforcement-learning-based (RL) flow control strategy to the flow past a cylinder confined between two walls, with the aim of suppressing vortex shedding. The control action is blowing and suction through two synthetic jets on the cylinder. The central theme of this study is how to use and embed physical information about the flow in the RL-based control. First, global linear stability and sensitivity analyses based on the time-mean flow and the steady flow (which is a solution of the Navier-Stokes equations) are conducted over a range of blockage ratios and Reynolds numbers. It is found that the most sensitive region in the wake extends as either parameter increases within the parameter range investigated here. These physical results are then used to guide the design of RL-based control policies. We find that the controlled wake converges to the unstable steady base flow, in which vortex shedding is successfully suppressed. A persistent oscillatory control appears necessary to maintain this unstable state. The RL algorithm is able to outperform a gradient-based optimisation method (optimised over a finite time window) in the long run. Furthermore, when flow stability information is embedded in the reward function to penalise the instability, the controlled flow may become more stable. Finally, in accordance with the sensitivity analyses, the control is most efficient when the probes are placed in the most sensitive region, and it can succeed even with only a few probes, provided they are placed in this manner.
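The abstract does not give the reward definition, so the following is only a rough sketch of how stability information might be folded into an RL reward for wake control: a conventional drag and lift-fluctuation penalty augmented with a penalty on an instability indicator (e.g. the growth rate of the leading mode or the deviation from the steady base flow). Every function name, argument, and weight below is a hypothetical placeholder, not the authors' formulation.

```python
import numpy as np

def reward(drag, lift_history, instability_measure, w_lift=0.2, w_stab=0.5):
    """Illustrative per-step reward (assumed form, not from the paper).

    drag:                instantaneous drag coefficient (to be minimised)
    lift_history:        recent lift-coefficient samples; their fluctuation
                         serves as a proxy for vortex shedding
    instability_measure: e.g. growth rate of the leading unstable mode, or a
                         norm of the deviation from the unstable steady flow
    """
    lift_fluctuation = np.std(lift_history)  # shedding proxy
    stability_penalty = max(instability_measure, 0.0)  # penalise only instability
    return -(drag + w_lift * lift_fluctuation + w_stab * stability_penalty)

# Example call with dummy values
r = reward(drag=3.2, lift_history=[0.1, -0.2, 0.15, -0.1], instability_measure=0.05)
print(r)
```

In this sketch the stability term simply adds a weighted penalty to the usual force-based reward; the paper's actual formulation should be consulted for how the stability information is computed and weighted.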
