As practitioners and researchers working in the Security field, we have seen firsthand the positive impact that AI can have in helping make our society better.
To name just a few, we have witnessed AI technologies detect and prevent terrorist attacks, disrupt human trafficking rings that were destroying the lives of young victims, locate perpetrators of child online sexual abuse and even prevent the dissemination of dangerous health disinformation that put the lives of thousands of people in peril.
We are believers in AI for the good of society. And we are also aware that our work will face challenges.
We have a duty and a responsibility not only to comply with the basics of the FATE criteria (fairness, accountability, transparency, and ethics), but also to aspire to making AI work for Security stakeholders all across the spectrum.
We believe that Law Enforcement has the right to know how the AI tools it employs work, and what potential exposure those tools leave it open to. We believe that it also has the responsibility to understand the tools it uses and the potential negative impacts they can have on the citizens it engages with.
We believe that citizens have the right to understand the implications of our technologies, both the good and the bad.
We believe that AI practitioners need to reinforce our role as ambassadors for FATE, but also to go a step further and be ambassadors for the potential AI has to save lives and protect human safety. We should not seek to be arbiters of what is ethical, but rather work harder to remove bias and error to the best of our ability, with solid human values driving our technological innovations.
If you believe in the same, join us.