Supporting Trust and Transparency in AI for Security
Dataietica is a non-profit institute founded by AI practitioners and researchers in the security industry who want to fully realise the potential of new technologies for public safety and policing, while also understanding the implications of such innovations for ethics, privacy and human rights.

What is Dataietica?

Supporting the Entire AI Security Community

With Research and Multi-stakeholder Education

The rationale behind the Dataietica approach is that, to safeguard civil liberties and privacy while still taking advantage of the positive impact AI can have on protecting human lives, all players in the AI security space – law enforcement, AI researchers, developers and ordinary citizens – must be taken into account, and the concerns and needs of each addressed appropriately.

In many cases, the debate around ethics, transparency and trust in AI is confined to the research niche or left as an open question for AI developers and practitioners.

Dataietica brings together all impacted groups through research, training and awareness-building efforts designed to help advance our mission.

The capacity of AI to change the nature of policing and improve its performance and effectiveness – identifying persons of interest in crowded spaces, forecasting and predicting crime, and monitoring for drivers of violent extremism – is only beginning to be seen. Here are a few examples of how AI can help keep us safe.

Preventing Terrorist Attacks

AI can help predict threats, such as terrorist attacks, so authorities can act before they happen, saving lives.

Prevention of Online Radicalization

Over 80% of deadly terror attacks committed in the West were perpetrated by individuals who were radicalised online. AI can help detect and combat these dangerous radicalisation processes.

Investigating and Solving Crimes

Many crimes such as human trafficking, illegal arms sales and drug trafficking happen online – in plain sight or on the Dark Web. AI can help detect and investigate these dangerous activities.

Combatting Dangerous Misinformation

Disinformation – or “fake news” – has the potential to compromise democracy and endanger human lives. AI can detect this type of content and help ensure that citizens are consuming factual information.

Protecting Public Health

Misinformation also endangers public health, for example through false claims about vaccines. AI can likewise help combat the illegal sale of counterfeit or unsafe pharmaceuticals and vaccines.

Detecting and Preventing Abuse and Human Trafficking

Vulnerable people are victimised online every day by criminals who lure unsuspecting women and children into dangerous situations under false pretences. AI can help detect these patterns of abuse and trafficking before victims come to harm.

The Challenges

1. AI Bias

AI spans a wide range of methods, techniques and approaches. This heterogeneity poses additional challenges: the ethical problems inherent in the use of Natural Language Processing algorithms for text analysis, such as bias in the annotation of training data, are different from those raised by facial recognition technologies, such as racial bias and privacy concerns.
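
As an illustration of what annotation bias can look like in practice, here is a minimal sketch in Python. The record layout, the group labels and the disparity check are purely hypothetical and are not drawn from any real dataset.

```python
# Minimal sketch: auditing a labelled training set for annotation bias.
# The "group" and "label" fields and the example records are hypothetical.
from collections import defaultdict

records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]

totals = defaultdict(int)
positives = defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    positives[r["group"]] += r["label"]

rates = {g: positives[g] / totals[g] for g in totals}
print("Positive-label rate per group:", rates)

# A large gap between groups is one warning sign that the annotations
# (or the sampling behind them) encode a bias the model will learn.
print("Disparity:", round(max(rates.values()) - min(rates.values()), 2))
```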

2. Transparency and Explainability

The lack of transparency and accountability in machine learning models undermines human trust, and it can have severe consequences: people have been incorrectly denied parole, wrong decisions have led to the release of dangerous criminals, and scarce resources have been poorly allocated in criminal justice, medicine, finance and other domains because of the opaque nature of “black box” AI models.
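
One widely used way to probe a “black box” model is to approximate it with a small, interpretable surrogate whose rules can be read and questioned. The sketch below assumes scikit-learn is available and uses synthetic data; the model choices and feature names are illustrative assumptions, not a prescription for operational systems.

```python
# Minimal sketch: explaining an opaque model with a readable surrogate tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for an operational dataset.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# The opaque model whose decisions need to be explained.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train a shallow tree to mimic the black box's predictions, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The surrogate's rules are human-readable and can be reviewed and challenged.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(4)]))
```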

3. Privacy

The use of AI tools by law enforcement for investigative and preventive action has tremendous positive impact, but it raises a series of concerns in the minds of the public and poses challenges both for security agencies and for the developers of AI solutions. The boundaries of what counts as public information, what rights citizens have over their private data, where it is stored and how it is used all seem to vary, and the implications for personal privacy must be addressed.

4. Model Validity and Obsolescence

AI models are trained on contextual data. Data that is of poor quality or insufficiently varied produces misleading or erroneous outputs, and models are shaped by the environment and context in which they were built. In addition, every AI model is trained on data from a specific moment in time – usually data collected as the model is being built, though sometimes data that is months or even years old. These issues challenge the validity of AI models and therefore their accuracy and fairness, especially when models become outdated.
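
A simple way to spot a model going stale is to compare the distribution of an input feature at training time with the distribution it sees in operation. The sketch below assumes NumPy and SciPy are available and uses synthetic data; the feature, sample sizes and significance threshold are illustrative assumptions.

```python
# Minimal sketch: detecting distribution drift between training-time data
# and current data with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=2000)  # data the model saw
current_feature = rng.normal(loc=0.6, scale=1.2, size=2000)   # data it sees now

stat, p_value = ks_2samp(training_feature, current_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {stat:.2f}): revalidate or retrain the model.")
else:
    print("No significant drift detected.")
```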

5. Lack of a Clear Framework and Guidelines for Developers

Some AI developers are aware of explainability and interpretability problems, but these are not always top of mind. While many have the best of intentions, companies put commercial interests at the forefront, and ethical AI development and explainability often fall to the bottom of the priority list during product development. There is an urgent need to educate AI developers on the consequences of ignoring issues such as model bias and discriminatory algorithms, and to help AI businesses avoid these pitfalls.

6. Law Enforcement Skepticism

While policing is typically absent from popular speculation about jobs “being replaced by robots” or AI automation, there is a degree of skepticism to overcome among Law Enforcement Agencies. Beyond general mistrust of the accuracy of AI recommendations and a fear of job loss, there is serious concern that using AI in daily work will leave police officers exposed to ethical and legal liability, given the lack of clear guidelines for its use.

How We Help

AI Transparency Research Hub

Dataietica is an AI research hub, focusing on collaborative research projects whose objective is the advancement of transparent, trustworthy and ethical AI for public safety use cases.

Training for Law Enforcement and Security Agencies

We develop training for Law Enforcement Agencies looking to get started on the right foot with AI in their work, as well as courses designed to help law enforcement understand the ethical implications of AI.

Training for AI Practitioners and Researchers

We design training opportunities in best practices for ethical algorithm development, and we offer a space for researchers and practitioners to share experiences and challenges for the enrichment of the community.

Security AI for NGOs and Private Citizens

We offer resources and assistance to individuals, citizens' groups, civil society organisations and any other interested parties to help build a better understanding of AI in Security as a whole.

Pro-bono Helpdesk

Dataietica offers pro-bono advice to AI practitioners and researchers looking to implement a sounder strategy for guarding against bias and opacity in their projects.

We also counsel Law Enforcement and governments on best practices for implementing AI solutions for public safety purposes.

Dataietica also aims to be a resource for private citizens who are curious about, or even concerned about, the potential that AI has for public safety. 

Focus Areas

Fighting Bias

Bias can be built into AI, and algorithmic fairness is something all stakeholders should be concerned about. Achieving fairness requires careful consideration not only of the wider operational, organisational and legal context, but also of the negative personal impact that opaque and biased AI can have on citizens. Our research aims to tackle these important issues.

Citizen Trust

The potential of AI to keep society safe and protect human life can only be fully realised with the trust, buy-in and cooperation of the citizens affected by these technologies. That trust can only be earned if ordinary people understand, through education and full transparency, how AI works, its potential positive impact and its pitfalls.

LEA Trust

Law enforcement authorities face a number of challenges in using AI: they must ensure that their use of it complies with the Fairness, Accountability, Transparency and Explanation (FATE) criteria in order to have confidence in both its efficacy and its legality, yet they often lack sufficient information to do so. We believe that supporting LEAs in this regard will help advance fairness in policing.

Developer Ethics

The debate around ethics, transparency and trust in AI is too often confined to the research niche or left as an open question for AI developers and practitioners. We support AI practitioners working in Security in adopting ethical algorithm development and human-first AI innovation.

Our Outreach

Training for LEA and Government

We offer training for LEA and government stakeholders in the use of AI for security and policing, with a focus on ethical and transparent foundations.

Training for AI Practitioners

We develop training and resources for AI practitioners and researchers looking to better understand the pitfalls of opacity and bias in the development of AI tools for security purposes.

Citizen Resources: How AI Can Help

We help citizens understand AI for Security and the potential implications it might have for them, so that we can all be fully informed.

Contact

Get in Touch