Algorithms embedded in clinical decision tools have made it harder for patients from certain racial and socioeconomic groups to receive the care they need.
A new systematic review finds that only 5% of healthcare evaluations of large language models use real patient data, with significant gaps in the assessment of bias, fairness, and a wide range of clinical tasks.