GLM 4.7 (inside Copilot Chat) wrote macros for me for a day, with fewer than 5 errors reported.

Hi,

I already wrote this up in the thread about my recent project:

https://community.mindmanager.com/topic/3184-everyone-should-install-vscodesome-llms-writing-macrosaddons-just-got-easier-mm-anki


But I really, really want to emphasize this point.


Here is why: from 2023 until last month, Western LLMs like GPT, Claude, Gemini, and Grok, whether in their web or CLI versions, kept writing MindManager macros that looked good but failed with errors when run (the macro editor reports the error to you). And if you fed that error back into the LLM, hoping it would fix the problem for you, most of the time it simply fell into an endless error loop.
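
For anyone who has never opened the macro editor, below is a minimal sketch of the kind of macro I mean. I am assuming the usual object-model names here (ActiveDocument, CentralTopic, AddSubtopic, Text); please check them against your macro editor's object browser, because a single wrong member name is exactly the kind of thing that looks right but only fails when you run it, which is what kept happening with those LLMs.

    ' Minimal sketch of a MindManager macro (the editor uses a VBA-style Basic).
    ' The object-model names below (ActiveDocument, CentralTopic, AddSubtopic, Text)
    ' are my assumptions about the usual API - verify them in the macro editor's
    ' object browser; a wrong member name looks fine but errors only at run time.
    Sub Main
        Dim oCentral As Topic
        Set oCentral = ActiveDocument.CentralTopic    ' root topic of the open map

        ' Add one subtopic under the central topic.
        Dim oNew As Topic
        Set oNew = oCentral.AddSubtopic("Created by macro")

        ' Show something so you can see the macro actually ran.
        MsgBox "Central topic: " & oCentral.Text & " / added: " & oNew.Text
    End Sub

Run it from the macro editor; if a member name is wrong, the editor reports the runtime error, and that error text is what you paste back into the LLM.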


Recently, for my project, I even deployed Claude Opus 4.5. It's so expensive that just three rounds of chat used up my Claude Pro quota, and even when I let Opus 4.5 code overnight, the code still didn't run, as described above.

I used up my Claude Opus 4.5 and Sonnet 4.5 quotas; then I never had a good time with Codex either (I tried it; it may be good at reading your code for errors, but not at writing code). With Gemini CLI, the code also failed to run quite often.


Then I turned to GLM 4.7, because it ranks around 4th or 5th on LMArena's leaderboard. To my surprise, the code actually ran, so I used it to code for a whole day. Only once did the code report an error, and when I fed that error back into GLM 4.7 and ran the code again, it worked.


I'm not sure what is unique about GLM 4.7, but I will say this: if you want to write MindManager macros, GLM 4.7 is my top choice.

And personally, if you use GLM 4.7 to write the code and then use Gemini 3 Pro as a supervisor to review it, the success rate is even higher.


I am really, really disappointed in Opus 4.5, because people claim it is more senior than a human, "98% correct". No: for MindManager macros, it's GLM 4.7 that reaches that level.
