Tag Archives: software

The perpass list is for IETF discussion of pervasive monitoring. IETF specifications need to be designed to protect against pervasive monitoring where possible. This list is intended for technical discussions attempting to meet that goal. Discussion is limited to specific technical proposals for improvements in IETF protocols, their implementation or deployment, and to IETF process changes aiming to increase the likelihood that development, implementation and deployment of IETF protocols results in better mitigation for pervasive monitoring. Those with proposals are encouraged to embody them in detailed internet-draft specifications, rather than relying solely on email messages. The typical modus operandi of the perpass list should be to identify a credible piece of work, with identified volunteer effort, and then to find a home for that work within the IETF. Once such a home is identified, work should move to whatever other lists are relevant.

“What is ethics doing in a course for software engineers? Like medical, legal and business ethics, engineering ethics is a well-developed area of professional ethics in the modern West. The first codes of engineering ethics were formally adopted by American engineering societies in 1912–1914. In 1946 the National Society of Professional Engineers (NSPE) adopted their first formal Canons of Ethics. In 2000 ABET, the organization that accredits university programs and degrees in engineering, began to formally require the study of engineering ethics in all accredited programs: ‘Engineering programs must demonstrate that their graduates have an understanding of professional and ethical responsibility.’
Professional engineers today, then, are expected to both learn about and live up to ethical standards as a condition of their membership in the profession.”

Software Engineering Ethics

We’re soon to release the policy report mentioned in this excellent post by my old friend and former colleague Sam Gregory, and as part of the general digging around for the report, I came across this elegant run-down of the interactions between information technologies and privacy, seen from a human rights perspective. I hope our report and thinking come out as taut and informative:

Information technologies both extend and diminish personal control over the boundaries of the private. They extend privacy because they offer new means to set personal boundaries, to alter and project identity, and to participate and associate in the public sphere. Mobile phones and cameras, internet commerce, and social websites all harness and organise data to these ends. Data is likewise gathered in health databases to extend lifespan and manage disease, to monitor and enforce personal security, and so on. All these innovations can bolster the capacity of individuals to act autonomously.

On the other hand, technological advance challenges personal autonomy, traditionally understood. Private individuals neither manage nor own the technologies they increasingly depend upon. Personal privacy is (or is experienced as) threatened in four ways. First, the architecture of data and communications systems categorizes individuals and their attributes in novel and predetermined ways, for functional purposes that refashion personal profiles along terms created and administered by third parties. Second, the systems are now so advanced and complex that modern users do not and cannot expect to comprehend their functioning and adjustment, the amount and kind of data collected, who has access to it, and how access and usage is governed, if at all. Third, the IT revolution has been accompanied by a transfer of the management of public infrastructures into private hands. Whereas individuals previously entrusted the policing of their private spheres to public actors (the police, post and telecommunications services, public health services and so on), albeit guardedly, today it is not clear whether individuals expect the private sector to defend the security of their personal information from the state, or, conversely, expect the state to protect them from private abuses. Fourth, ordinary safeguards of the kind traditionally used to monitor governments tend to fail in a world where data flows barely recognise national jurisdictions.

(I also posted this a while back over on my web-foragings blog.)