Archived version: https://archive.ph/sNqZT
Archived version: https://web.archive.org/web/20240301021006/https://www.theguardian.com/world/2024/feb/29/canada-lawyer-chatgpt-fake-cases-ai
Even if you did, that's no guarantee the model won't hallucinate. It might just hallucinate better. Always manually verify anything that is important to you.
Yes. The output could include some kind of ID or case number for the user to manually verify.
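For instance, a minimal sketch of what that could look like (field names and case identifiers here are hypothetical placeholders, and there is no real case-law lookup involved), where the tool surfaces each citation's case number so the reader can check it against an official registry:

```python
from dataclasses import dataclass

@dataclass
class Citation:
    case_name: str    # as produced by the model
    case_number: str  # docket / neutral citation for manual lookup
    court: str

def verification_checklist(citations: list[Citation]) -> str:
    """Format model-supplied citations into a checklist the user must
    confirm against an official source before relying on them."""
    lines = ["Verify each citation manually before use:"]
    for c in citations:
        lines.append(f"  [ ] {c.case_name} ({c.case_number}, {c.court})")
    return "\n".join(lines)

# Example with a placeholder entry, not a real case.
print(verification_checklist([
    Citation("Doe v. Example Corp", "2023 ABC 123", "Example Court"),
]))
```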