It’s not just the internet.
Professionals (using the term loosely) are using LLMs to draft emails and reports, and then other professionals (?) are using LLMs to summarise those emails and reports.
I genuinely believe that the general effectiveness of written communication has regressed.
I honestly wonder what these sorts of jobs are. I feel like I have barely any reason to use AI in my job.
But that may be because I'm not summarising much, if anything.
AI can't think, and how long an email would someone have to be writing for it to ever be worth the effort of asking the AI to write it for you?
By the time you've told it everything you wanted included, you could have just written the damn email.
I've tried using an LLM for coding - specifically Copilot for VS Code. Only about 4 times out of 10 will it generate accurate code, which means I spend more time troubleshooting, correcting, and validating what it generates than actually writing code.
I use it to construct regexes which, for my use cases, can get quite complicated. It's pretty good at doing that.
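As a rough illustration (in Python, and a made-up example rather than one of my actual patterns), this is the sort of fiddly regex I mean - the kind where an LLM draft saves you squinting at character classes:

```python
import re

# Hypothetical example: validate an ISO-8601-style timestamp like
# "2024-03-15T09:30:00Z" or "2024-03-15T09:30:00+05:30".
ISO_TIMESTAMP = re.compile(
    r"^\d{4}-"                            # four-digit year
    r"(0[1-9]|1[0-2])-"                   # month 01-12
    r"(0[1-9]|[12]\d|3[01])"              # day 01-31
    r"T([01]\d|2[0-3]):[0-5]\d:[0-5]\d"   # time HH:MM:SS (24-hour)
    r"(Z|[+-][01]\d:[0-5]\d)$"            # 'Z' for UTC, or a +HH:MM offset
)

print(bool(ISO_TIMESTAMP.match("2024-03-15T09:30:00Z")))    # True
print(bool(ISO_TIMESTAMP.match("2024-13-15T09:30:00Z")))    # False (month 13)
```

Even when the LLM gets a pattern like this mostly right, it's worth keeping a couple of positive and negative test strings around to validate it, as above.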
Apparently Claude 3.7 Sonnet is the best one for coding.
I like using GPT to generate PowerShell scripts; surprisingly, it's pretty good at that. It's a small task, so it's unlikely to go off the deep end.