AI chatbots like OpenAI’s ChatGPT clearly have the potential to become useful tools in a lawyer’s toolbox. Of course, lawyers using AI chatbots, like lawyers using any other tools, must be mindful of their ethical obligations, including not only the duty to verify the accuracy of the results of legal research, but also the duty to maintain client confidentiality. In considering what client information can be used in prompting a chatbot, Rule 1.6 of the ABA’s Model Rules of Professional Conduct provides guidance, but not a bright-line rule.
Rule 1.6(a) provides, in pertinent part, that “[a] lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation or the disclosure is permitted by paragraph (b).” Thus, the primary question in prompting a chatbot is whether the prompt will “reveal information relating to the representation.”
To avoid an impermissible disclosure, a lawyer may be able to use hypothetical facts. Although Comment 4 to Rule 1.6 states that the prohibition on revealing information about the representation “also applies to disclosures by a lawyer that do not in themselves reveal protected information but could reasonably lead to the discovery of such information by a third person,” the Comment further states that “[a] lawyer’s use of a hypothetical to discuss issues relating to the representation is permissible so long as there is no reasonable likelihood that the listener will be able to ascertain the identity of the client or the situation involved.” Thus, a lawyer may be able to use a hypothetical to preserve client confidences, but only if the lawyer can be reasonably sure the hypothetical will not reveal the identity of the client or the situation involved.
Although ChatGPT has extraordinary capabilities, it disclaimed any ability to identify a client when asked whether it could do so based on hypothetical facts and a legal question about those facts:
No, I cannot ascertain the identity of your client based on hypothetical facts and a legal question. I don’t have the ability to access external databases, internet, or any real-time information beyond my last update in September 2021. Therefore, I can't match the hypothetical facts you provide to real-world events or individuals.
Moreover, if you’re concerned about confidentiality, I recommend providing only anonymized and generalized information and avoiding specific details that could potentially identify your client. Always prioritize your client’s privacy and the ethical guidelines of your profession [1].
ChatGPT’s response is notable in two respects. First, it appears to suggest that ChatGPT might be able to identify a client based on hypothetical facts and a legal question if it were given access to external databases, the internet, or real-time information beyond its last update in September 2021. Second, it reinforces the concern that confidentiality may be compromised if the user provides anything more than anonymized and generalized information. In any event, lawyers using ChatGPT must carefully consider whether they may be jeopardizing client confidentiality for another reason.
Whether or not ChatGPT could ascertain the identity of any client, providing information to ChatGPT may jeopardize confidentiality because it could lead to a human reviewing the chat. A response to one of the frequently asked questions on OpenAI’s website states as follows:
A limited number of authorized OpenAI personnel, as well as specialized third-party contractors that are subject to confidentiality and security obligations, may view and access user content only as needed for these reasons: (1) investigating abuse or a security incident; (2) to provide support to you if you reach out to us with questions about your account; (3) to comply with legal obligations; or (4) when we fine tune our models using user-submitted data (unless you have opted out), we also use PII filtering techniques to reduce the amount of personal data used. Access to content is subject to technical access controls and limited only to authorized personnel on a need-to-know basis. Additionally, we monitor and log all access to user content and authorized personnel must undergo security and privacy training prior to accessing any user content [2].
Thus, ChatGPT likely falls into the category of “cloud computing,” which has been the subject of numerous state bar ethics opinions addressing confidentiality considerations [3]. For example, the Connecticut Bar Association has defined “cloud computing,” for purposes of its opinion, as including “the storage, transmission, and processing of data (client information) using shared computer facilities owned or leased by a third-party service provider.” Recognizing the benefits of cloud computing, the Connecticut Bar Association concluded that Connecticut lawyers may use it as long as they “make reasonable efforts to meet their obligations to preserve the confidentiality of client information and to confirm that any third-party service provider is likewise obligated.”
[3] See, e.g., Informal Opinion 2013-07, Cloud Computing, Connecticut Bar Association (June 19, 2013) (collecting ethics opinions from numerous states).