The issue with ‘AI’ is that it is so broad.
So we have Generative AI and other AI.
So when you talk about developing disease treatments, to the extent that AI is involved, it's generally not generative AI but other machine learning techniques, each with limitations. E.g. AlphaFold is pretty good at structure predictions for some proteins, but falls apart for certain classes. Useful, with limitations.
When you get into helping diagnose, then maybe you're in generative AI territory, and maybe it's useful for surfacing relevant medical research the doctor couldn't have kept up with on their own. However, it shouldn't be a crutch, and getting caught up in trying to coax an answer out of an LLM can be just as bad here as it is for anything else. So it's maybe useful if the doctors assume its answers are supremely stupid but it still manages to identify actually relevant source material for an unrecognized problem. Outside of LLMs, the more 'traditional' AI approaches could help with things like a quick check on imaging that might otherwise have been skipped (if we actually had enough quality labeled data for asymptomatic problems in scans, which I don't think we do). They might be able to identify more complex patterns in bloodwork, but again, they would have to be trained in nuanced ways I don't think we're equipped for.
Preventing crime is a tough one. I don't think I've seen anything resembling success above and beyond a human understanding of crime frequencies in an area, which is generally self-evident from a map of incident reports without an AI saying anything. I know they tried to predict recidivism from data about a subject, but that was a colossal failure.
The general conundrum is that generative AI is unreliable and generally no more magical than a pretty dumb human looking at fairly obvious visualizations. You need use cases where there's some potential improvement that wasn't worth human attention before. For example, hypothetically, if you needed to search for a literal needle in a haystack, an effort that wouldn't be worth human time, an AI approach could maybe help you find it. It might flag a hundred straws of hay as needles and may even miss the needle entirely, but there's at least some chance it brings the problem within practical reach of a human, so long as it's not that important if the needle can't be found anyway.
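The triage arithmetic behind that haystack example can be sketched with made-up numbers (the function name, the 1-in-10,000 false positive rate, and the 80% recall are all hypothetical, just to make the point concrete):

```python
def triage_workload(n_items, false_positive_rate, recall):
    """A noisy detector as a pre-filter for a human.

    Returns the expected number of flagged items the human must
    review, and the probability the single true needle is among
    them (i.e. the detector's recall).
    """
    # Expected flags ~= false positives over all straws, plus the
    # needle itself weighted by the chance the detector catches it.
    expected_flags = n_items * false_positive_rate + recall
    return expected_flags, recall

# A million straws: far too many for a human to check by hand.
flags, p_found = triage_workload(
    n_items=1_000_000,
    false_positive_rate=0.0001,  # ~1 in 10,000 straws misflagged
    recall=0.8,                  # 80% chance the needle is flagged
)
# Reviewing ~100 flagged items is feasible; reviewing a million was not,
# and there's still a 20% chance the needle was never flagged at all.
```

Which is exactly the trade: a hundred-odd false needles and a real chance of missing the true one, in exchange for shrinking the pile to something a human can actually look at.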