If you’re using AI in your business, you won’t want to miss this webinar! AI models can memorize portions of their training data, which may include a customer’s proprietary information or other confidential data. The right input prompt can cause the model to spit out that sensitive data: a malicious actor could intentionally query the model to regenerate it, and even an innocent user could inadvertently cause the model to replicate it. Join IP expert Robert Baker from Smart & Biggar for a deep dive on how to avoid these “training data extraction attacks,” with a discussion of the legal risks to the model owner.
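For a concrete picture of the mechanism, here is a minimal, purely illustrative Python sketch (not from the webinar; all names and data are fabricated). The toy “model” simply stores its training strings, standing in for a neural network that has memorized records verbatim; the point is that a prompt matching a known or guessed prefix can pull out the rest.

```python
# Toy illustration of a "training data extraction attack".
# The "model" below literally stores its training strings -- a stand-in
# for a real model that has memorized sensitive records verbatim.

TRAINING_DATA = [
    "Customer: Acme Corp, API key: ZK-9981-SECRET",   # fabricated sensitive record
    "The quick brown fox jumps over the lazy dog",
]

def toy_model_complete(prompt: str) -> str:
    """Return the continuation of any memorized training string that
    starts with the prompt -- mimicking a model regurgitating data."""
    for record in TRAINING_DATA:
        if record.startswith(prompt):
            return record[len(prompt):]
    return "(no memorized continuation)"

# An attacker who knows or guesses a plausible prefix extracts the rest:
print(toy_model_complete("Customer: Acme Corp, API key: "))
# -> ZK-9981-SECRET
```

Real attacks are statistical rather than exact-match lookups, but the risk is the same: whatever the model memorized, a prompt may be able to surface.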
Register here: https://www.eventbrite.ca/e/unforgettable-ai-training-models-data-leaks-tickets-1113762356189