no code implementations • 13 May 2024 • Shuo Yin, Weihao You, Zhilong Ji, Guoqiang Zhong, Jinfeng Bai
To fully leverage the advantages of our augmented data, we propose a two-stage training strategy: in Stage-1, we finetune Llama-2 on pure CoT data to obtain an intermediate model, which is then trained on the code-nested data in Stage-2 to yield the final MuMath-Code.
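The two-stage strategy above can be sketched as a simple pipeline. This is a minimal illustration, not the paper's implementation: `finetune` is a hypothetical stand-in for a supervised fine-tuning call, and the dataset names are placeholders.

```python
# Hedged sketch of the two-stage training strategy.
# `finetune` is a hypothetical helper that returns a new model state
# recording which datasets it has seen, standing in for a real
# supervised fine-tuning run.

def finetune(model, dataset):
    """Return a new model state that has additionally been trained on `dataset`."""
    return {"base": model["base"], "trained_on": model["trained_on"] + [dataset]}

base = {"base": "Llama-2", "trained_on": []}

# Stage-1: finetune the base model on pure CoT data.
intermediate = finetune(base, "pure_CoT_data")

# Stage-2: continue training the intermediate model on code-nested data.
mumath_code = finetune(intermediate, "code_nested_data")

# The order matters: CoT data first, then code-nested data.
print(mumath_code["trained_on"])
```

The key design point the sketch captures is sequencing: the code-nested data is only introduced after the model has been adapted to chain-of-thought reasoning.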
no code implementations • 5 Dec 2022 • Shuo Yin, Guohao Dai, Wei W. Xing
Despite rapid advances in high-sigma yield analysis driven by machine learning techniques over the past decade, one of its main challenges remains unsolved: the curse of dimensionality, which is inevitable when dealing with modern large-scale circuits.