We’re excited to announce that we’ve closed a $30M Series B to lead the effort to eliminate AI risk 🚀🚀🚀 Read more about how we plan to do this in our recent interview with
Curious to learn how executives at leading companies instill integrity in their ML systems?
Hear about the strategies and engineering paradigms used to eliminate model failure at Equinix, JPMorgan, Splunk, SurveyMonkey, Wells Fargo, and more!
Register for ML:Integrity on Oct 19
Join industry leaders at the ML:Integrity conference!
It's a free, virtual conference with speakers from @awscloud, @huggingface, @DeepMind, @Reddit, @PayPal and more!
Register for free here: https://mlintegrityconference.com
Join the first event dedicated to advancing ML integrity.
ML:Integrity is a free, virtual conference about:
1. ML Failure Prevention
2. Open Source ML
3. Compliance & Regulation
4. ML Quality Control & ML Security
5. Bias in ML
Register for free here: https://mlintegrityconference.com
As part of #CybersecurityAwarenessMonth, we're happy to host a panel on ML security best practices at our ML:Integrity conference.
Join industry experts
Adversarial attacks vary in sophistication from simple and manual to algorithmic and complex. That's why it's important for security and data science teams to include AI in their attack surface and learn how to prevent bad actors from causing harm.
With AI adoption comes AI risk. One symptom of AI risk is failure due to attack. AI systems aren't just vulnerable through the software, data, or model supply chain during development; they are also vulnerable post-deployment.
is speaking on the 'Keeping it Ethical in AI' panel.
We'll be at booth 34. Stop by to say hi, and ask us how we help companies eliminate model failure through ML integrity.
#aiandbigdataexpo
This year’s ML:Integrity conference will be held virtually on Wednesday, October 19. The agenda is slated to include over a dozen talks on ML fairness, security, scale, regulation, and more.
Check out our blog post and register today!
https://robustintelligence.com/blog-posts/introducing-ml-integrity…
The pursuit of integrity in ML is shared by the data science community. As such, we’re excited to announce ML:Integrity, an annual conference that will serve as a forum for industry leaders to share their perspectives and best practices, as well as advocate for standards.
It’s no secret that today’s machine learning models fail frequently, which can have dire consequences when they’re used to make critical decisions. This is why companies are actively building strategies and engineering paradigms to instill ML integrity in their systems.
Introducing the ML Model Attribution Challenge, a technical competition designed to spur creative approaches to identifying the true origin of fine-tuned LLMs.
The competition kicks off at
Annual Summit tomorrow (06/09): “Eliminating AI Risk, One Model Failure at a Time”. If you’re in attendance, stop by Grand West to see it live and follow up with Yaron at booth 16!
We were selected to exhibit at the DoD Digital and AI Symposium! For those working with #AI to advance national security, drop by our virtual booth June 7-8. We look forward to sharing how we safeguard #ML models to eliminate AI failures. #DODCDAO #DIGAI2022
How can we build AI systems that are more robust, safer, and savvier at tackling faulty data? In our latest TDS Podcast episode, host @jeremiecharris chatted about these (and other) topics with @robusthq CEO @yaronsinger. https://buff.ly/3rD8b9n
The man who’s restoring common logic to artificial intelligence.
After uncovering the inconvenient truth about AI, Yaron Singer decided to walk away from a promising career at Harvard and set up start-up
New episode of Practical AI! Eliminate AI failures
with @yaronsinger from @robusthq and hosts @chrisbenson and @dwhitena. #ai #machinelearning #datascience https://practicalai.fm/163