\section{42 Experiments under the Ethical Looking Glass}
After reviewing the first ten projects, we published an initial public report on our findings \cite{DXX}, which already summarized many of the challenges. In this whitepaper, 32 experiments later, we briefly summarize the areas that proved critical during our ethics monitoring.
As a methodology, we reviewed all 42 projects at the start, at midterm, and after finalization. We did this as a group of technical experts with extensive practical experience in data innovation, advised by a legal expert. When we started, we had few tools and expected to focus on rather theoretical edge cases of unintended use of data and AI models.

However, as we discovered, and this is probably the major finding of our extensive experimentation, the majority of applications touch on real hazards around data. In most of these cases, the biggest problem was that the initial documentation left unclear whether real ethical risks could be foreseen. Given the dynamics of any funding application, it is easy to imagine the positive impacts and the envisioned scaling of data processing. In general, the attention paid to negative impacts and adversarial effects was clearly disproportionate to this optimism for most projects at the start of experimentation. We believe that ethics by design and by default first of all requires awareness of such hazards, which we tried to provide as feedback to the experiments.
\subsection{Protection of Personal Data}
Examples, examples, examples
\subsection{Potential Adversarial Effects of the Use of AI}
Examples, examples, examples
\subsection{Dual Use of Data and AI}
Knowledge, as well as the information and technology that can be used to derive it, can be very powerful. At least since the beginning of the war in Ukraine, we have realized that critical infrastructure needs to be protected and that export control can have a real impact on history.
Similarly, a company working on, for example, drone technology as part of one of our experiments is typically aware of potential dual-use issues. However, we have seen a reluctance in the industry to talk too openly about these aspects. From a compliance point of view, the regulation stipulates that drones that can fly autonomously further than 300 km are technically considered rockets. Legally, we therefore only needed to ensure that no technology was developed that allows autonomous flight beyond that point. From an ethical perspective, however, we should look more generally at the underlying dual-use risks when developing technology that supports autonomous flight. Again, as discussed before, clear arguments can in the end be found to limit the potential impact of misuse to an acceptable level; yet this case also shows that, instead of discouraging discussion of dual use, we need to raise awareness and demonstrate mitigations to gain trust in innovations. Particularly when innovation happens in a very diverse ecosystem, the innovation hubs that provide support, e.g., with machine vision technology, also need to understand the potential impact of their AI models.
The data processed and used is an important factor as well. While operators try to ensure that their infrastructure is not considered ``critical infrastructure'' in the legal sense, the view on potential misuse of data may shift: data acquired from monitoring infrastructure can just as well be used to attack it.
\subsection{Data and Competition}
A big underlying problem that