Reading 14: #CS4All
While I don't think it's completely necessary that everyone take a computer science or coding class, I do think that the knowledge acquired in those classes is very useful to anyone living in this day and age in a developed country.
We're surrounded by technology - we use it to communicate, to work, for entertainment, to create things, to track our fitness, to help with everyday chores... the list goes on. I think that basic knowledge about computers and coding is useful to anyone solely for the purpose of having a greater understanding of how things work.
For that reason, I think it's wise for schools to be teaching some basics of computer science. I wouldn't put coding at the level of "new literacy" - I don't think coding should have as much of a focus as there is on reading, writing and speaking, but I do think some basis of coding is very beneficial for everyone - both from an academic and personal perspective.
Some of the arguments for teaching all students programming have to do with equity. As we’ve discussed previously in class, the tech world is dealing with a lack of diversity. The CSForAll movement, however, would give kids the same basic level of computer fluency. It has the potential to create a more even playing field. Students who otherwise may not have tried CS classes due to lack of representation or economic barriers would have the opportunity to try CS, and possibly change their minds or open their eyes to a new passion. In the article about Chicago Public Schools adopting CS as a mandatory part of their curriculum, Emile Chambry (founder and director of tech incubator Blue1647) makes a point along these lines, saying, “You're getting a lot of underrepresented students, a lot of minorities, a lot of girls that otherwise might have missed that opportunity.”
Attracting more underrepresented students to the field of computer science also has the potential to be a tool for increasing socio-economic mobility. As long as wages in the computer science field remain relatively high, marginalized groups who decide to pursue a career in CS will have more economic opportunities.
Apart from these reasons, the main push for CSForAll is the skills that would be developed. Whether or not students go into a CS-related field, a working knowledge of computers and programming languages is extremely beneficial in today’s world. And, apart from the practical sense of learning to code, there’s the added learning of how to solve problems and deal with failures.
Although the CSForAll movement sounds enticing, there are some drawbacks. One of the main ones is the lack of teachers who can teach code, and the amount of funding it would take to run the workshops and conferences that would give K-12 teachers the skills they need to teach a CS class. Learning initiatives are only as strong as the teachers and leaders behind them; in theory it’s a great idea to teach all kids some basic programming, but experienced teachers are needed to take on that job. Considering already-deficient teacher salaries, it’s hard to imagine where the funding for programs like this would come from.
There’s also the issue of how CS classes should be implemented. If school districts or states decide to implement CS courses, should they be electives or requirements? According to the article about CPS, there are currently 28 states (including Illinois) and Washington, D.C. that allow computer science courses to count toward math or science graduation requirements. The Chicago Public Schools system, however, has made it a graduation requirement that each student pass one credit of computer science. In other parts of the country, such as Florida, CS classes are being offered as an alternative to a foreign language.
Personally, as someone who has studied Spanish for eight years and been an exchange student twice, I feel strongly that CS shouldn’t be offered as an alternative to a foreign language. One of the most challenging aspects of learning a language is learning to speak it, which isn’t an aspect of learning a programming language. I think that offering CS classes as a math or science elective is a good option. While I feel that taking programming classes has given me very valuable skills, I don’t think it should be pushed on everyone. I think that math and science are challenging in similar ways to computer science, and students should have the right to choose.
Regarding what the course(s) should involve, I think that more user-friendly languages like Python are a great way to start. I also think it’s worthwhile to teach other general computer-related topics such as privacy and security, computer ethics, and even Excel, since it is so widely used (I wish I had gotten experience in Excel prior to college).
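To make the "user-friendly" point concrete, here's the kind of first program a beginner might write in Python (a made-up exercise, just for illustration) - it reads almost like English, which is a big part of why it's a good starting language:

```python
# A typical first exercise: count how many numbers in a list are even.
numbers = [3, 8, 14, 5, 22, 7]

even_count = 0
for number in numbers:
    if number % 2 == 0:  # the % operator gives the remainder of a division
        even_count = even_count + 1

print("There are", even_count, "even numbers in the list.")
```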
I do believe that anyone can learn to program. I think it is similar to learning mathematics in that everyone is capable of learning if they have a good teacher and they’re willing to work hard. Some people may have more of a natural talent or interest than others, but everyone is capable of learning.
Overall, I don’t think that everyone necessarily should learn to program, but I do think it is a very useful thing to learn whether or not someone goes into computer science.
Reading 13: Intellectual Property
Copyright is defined by Wikipedia as “a legal right created by the law of a country that grants the creator of an original work exclusive rights for its use and distribution. This is usually only for a limited time.”
There are some obvious benefits to having copyright laws - they protect creators by ensuring they have exclusive rights to their creations, allowing them to benefit from those creations instead of having their ideas taken and profited from by other people. It seems both ethically just and economically beneficial to protect creators and their creations.
However, copyright laws have drawbacks. Especially in the world of software, there can be times when copyright hinders innovation and is unnecessarily constraining to developers and creators. We mentioned in class that we “stand on the shoulders of giants”, meaning that current advances in software wouldn’t be possible without taking ideas from, taking code from, learning from and collaborating with past advances in the field. To truly not copy or steal ideas from others in the world of software development is essentially impossible.
Open Source versus Proprietary License
There are situations when open source is preferable to a proprietary license. Wikipedia, for example, is one of the most well-known open source operations. Given the extensive information an encyclopedia is expected to contain, it makes sense that Wikipedia would be open source. Proprietary licenses make more sense when the creator of a project or application wants to own it in order to profit from it. There are times when both of these objectives are desirable - having many people collaborating and sharing knowledge on the same subject, while also wanting to make a profit and/or keep trade secrets.
I don’t think that open source software is inherently better. With regards to cases like Heartbleed and Shellshock, security vulnerabilities will always be an issue. Just because open source platforms may be vulnerable to certain things by nature, I don’t see that as a reason to stop open-sourcing. The benefits that open-source platforms provide - innovation, sharing of knowledge, collaboration - are all too valuable to give up over threats that, quite frankly, will likely always be a potential concern. Because of those security concerns, however, there is reason to carefully consider whether or not something should be open-sourced.
“Free Software” versus “Open Source”
From the readings about ‘free software’ and ‘open source’, they seem to be the same thing in practice. The only differences I see lie in technicalities, wording and the intentions behind the different words. The idea of free software is credited to Richard Stallman; it differs from open source in the sense that the software isn’t just meant to be open to viewing, but open to collaboration and changes, and anyone can profit from changes made.
Between GPL and BSD licensing, BSD licensing seems to be more free. With GPL licensing, a licensee may only redistribute modified versions of the software under the same GPL terms, whereas BSD licensing “allows proprietary use and allows the software released under the license to be incorporated into proprietary products”. In a way, BSD licensing is more free to individuals because they can create software and retain ownership of their creation, while GPL is more free in the societal sense, since modifications to GPL-licensed software cannot be locked down by their creator but must be shared with all.
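In practice, the difference often comes down to a single header line in each source file. Here's a sketch using SPDX license identifiers (a real convention for machine-readable license tags); the files and functions are hypothetical:

```python
# --- hypothetical file: permissive_module.py ---
# SPDX-License-Identifier: BSD-3-Clause
# A company may fold this file into a closed-source product, modify it,
# and never release their changes.

def greet(name: str) -> str:
    return f"Hello, {name}!"

# --- hypothetical file: copyleft_module.py ---
# SPDX-License-Identifier: GPL-3.0-only
# Anyone may modify and redistribute this file, but only under the same
# GPL terms, so improvements stay available to everyone.

def farewell(name: str) -> str:
    return f"Goodbye, {name}!"
```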
Case Study: Google versus Oracle
In the case of the lawsuit between Google and Oracle, I think that the court shouldn’t have ruled that APIs are copyrightable. In the long run, copyrighting APIs will only hinder innovation and cause numerous expensive lawsuits. In this debate, I tend to side with the view expressed by Electronic Frontier Foundation legal director Corynne McSherry in the reading from Wired, The Case that Never Ends: Oracle wins latest round vs. Google: “This creates a tremendous incentive for lawyers and copyright trolls to look for litigation.”
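The technical distinction at the heart of the case - the "declaring" code of an API versus its implementation - is easier to see in code. The case itself concerned Java's API declarations; this is just a rough Python analogy I made up:

```python
# The "API": the names, parameters, and organization that callers depend on.
class Stack:
    def push(self, item): ...
    def pop(self): ...

# Two independent implementations of the same API. Google's position was
# roughly that reusing the declarations (so existing code keeps working)
# should be fair game even when the implementation is written from scratch.
class ListStack(Stack):
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

class LinkedStack(Stack):
    def __init__(self):
        self._head = None
    def push(self, item):
        self._head = (item, self._head)
    def pop(self):
        item, self._head = self._head
        return item
```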
Reading 12: Bitcoin / Self-Driving Cars
Reading 11: Artificial Intelligence
The definition of Artificial Intelligence varies depending on who is asked. IBM and Google would say they have created AI with machines like Watson and AlphaGo. On the other hand, some prominent thinkers in the AI world would argue these machines are not truly AI. It comes down to how AI is being defined.
One of the main points of contention in the conversation about AI is the difference between 'Weak AI' and 'Strong AI':
Machines exhibiting Weak AI have the ability to simulate some human behaviors, for example, 'thinking' through strategic games and 'learning' from past decisions. Yet Weak AI systems generally have narrow expertise, and they don't exhibit human reasoning or cognition. They fail to make sense of the world for themselves. This is where Strong AI comes in. The idea of Strong AI is that it would be able to exhibit human reasoning or cognition. It has not been created yet, and some AI theorists say it could be decades before anything resembling Strong AI is possible. Some consider Strong AI the next big advancement in the field of artificial intelligence.
Roger Schank, an American AI theorist and cognitive psychologist among other things, had this to say about Watson:
"Watson is not reasoning. You can only reason if you have goals, plans, ways of attaining them, a comprehension of the beliefs that others may have, and a knowledge of past experiences to reason from... Watson is a fraud".
In an article posted to his personal website, he essentially argues that Watson merely counts words and draws conclusions. Suggesting that Watson is performing "cognitive computing" or could "out-think" cancer, as IBM advertises, is a lie in the eyes of Schank.
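To get a feel for what "counting words" means here, consider this crude sketch (my own toy example, nothing like Watson's actual system): it matches a question to candidate answers purely by overlapping vocabulary, with no model of meaning anywhere.

```python
from collections import Counter

def word_overlap(question: str, candidate: str) -> int:
    """Score a candidate answer by how many words it shares with the question."""
    q_words = Counter(question.lower().split())
    c_words = Counter(candidate.lower().split())
    return sum((q_words & c_words).values())  # multiset intersection

question = "which president freed the slaves"
candidates = [
    "abraham lincoln freed the slaves with the emancipation proclamation",
    "the louvre is a famous museum in paris",
]

# Picks the right answer on vocabulary overlap alone -- no "reasoning" involved.
best = max(candidates, key=lambda c: word_overlap(question, c))
print(best)
```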
Similarly, Jean-Christophe Baillie -- a French scientist and entrepreneur with specialties in robotics and linguistics -- wrote an article discussing why he believes Google's AlphaGo is a powerful computer, but not Strong AI. However, he goes on to say that advances like AlphaGo are good because they have potential applications in fields like medical research, industry, environmental preservation, and other areas.
What about the Turing Test - is it a valid measure of intelligence? Or is the Chinese Room a good counter-argument? My immediate reaction is that the Chinese Room is a valid counter-argument to the Turing Test. There is a difference between simulating understanding of language by following a set of steps and rules, and actually understanding the words. The latter seems not to have been created yet.
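The Chinese Room intuition is easy to demonstrate in a few lines: a program that "converses" by looking up rules, with no understanding anywhere in the loop (the rulebook below is invented for illustration):

```python
# A lookup table of patterns -> canned replies, standing in for Searle's
# rulebook. The program manipulates symbols it does not understand.
RULEBOOK = {
    "hello": "Hello! How are you today?",
    "how are you": "I am doing well, thank you for asking.",
    "what is your name": "My name is Room. Nice to meet you.",
}

def reply(message: str) -> str:
    for pattern, response in RULEBOOK.items():
        if pattern in message.lower():
            return response
    return "Interesting. Tell me more."  # default deflection

print(reply("Hello there!"))        # passes for conversation...
print(reply("What is your name?"))  # ...yet nothing here "knows" English.
```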
Because of this, I am not too concerned at present about the development of artificial intelligence - at least with regards to some robot army taking over the planet. I personally cannot imagine any machine ever being able to have the consciousness of a human or display true 'Strong AI'. That would mean humans having some God-like ability to create conscious thought.
I do, however, have concerns about possible application of the types of Weak AI that already exist. Machines like Watson, AlphaGo and Deep Blue are very powerful computers. My reaction to that is along the lines of my reactions to many other topics we have studied in this class: with great power comes great responsibility.
All of the new technologies being developed are both incredibly exciting and potentially great forces for good in the world, and they are also incredibly scary considering their potential uses for evil.
Reading 10: Fake News
Fake news is the dissemination of information, generally "news articles", that contains little to no factual information and/or intentionally skews facts to mislead people.
When "fake news" started to get coverage during the 2016 US presidential election, I initially only found it annoying, and slightly concerning that some people would believe that stuff in the first place, or not try to substantiate some of this news before sharing it with their friends and family.
However, post-election, my opinions on this issue have become much stronger. From several of this week's readings on fake news, it seems fair to say that fake news did influence the 2016 presidential election. From the Russian "troll factories" such as the "Internet Research Agency", to Cambridge Analytica's exploitation of personal information gathered from Facebook, to teenagers in Macedonia profiting from creating fake news, it's clear that fake news was not only created, but believed by many American citizens. To what extent it really changed election results is still unclear, but to be fair, that would be a hard thing to measure.
Regardless of the effect of fake news on the election, the potential impacts of fake news have become a stark reality for our generation. It is scary, invasive, and could be very harmful to society.
Due to the potentially grave consequences of spreading false information branded as fact, I believe tech companies should try to suppress fake news. This doesn't necessarily mean censoring, but they should do what they can to alert their users when information being spread is potentially false. However, I have no problem with censoring "news" that is intentionally false and misleading, such as in the case of the teenagers from Macedonia.
When I consider fake news relating to my own Facebook feed, I want to say that the majority of the news is somewhat based in fact. At the very least, I hardly ever notice information that is overtly false. Often, I scroll across information that is questionable, but that's just the nature of the internet. It's always important to fact-check and substantiate claims before taking them to be true.
I am comfortable with a private entity classifying information as "fake" when it truly is fake. Calling something out for what it is is fine by me. Plus, these are private companies and have the right to do what they want in that respect. However, I think they should be careful in what they choose to censor. If they can't prove something wrong, I don't think they should be censoring it.
As far as my own use of social media, I do use Facebook, but I don't use Twitter. The extent to which I believe the news I get from Facebook depends on the source of the news (is it from a news source I know and trust?) and if I'm not familiar with the news source, I will consider who I know that shared it, liked it and/or commented on it. If it is a person I consider to be somewhat intellectual and have sound judgement, I am more likely to trust the source. Even in those cases though, I tend to cross-check those headlines with major, reputable news sources. I would say I'm fairly skeptical when it comes to what information and news I trust.
Especially given the econometrics class on research methods and statistics that I'm taking this semester, I've become more wary of believing statistics used in articles, even by major news sources.
So, does truth stand a chance in a world dominated by "Fake News"?
If Americans become aware that fake news exists, and learn how to filter out what is true and what is false, truth still has a fighting chance.
Reading 09: Net Neutrality
Net Neutrality is the concept that the government should require internet service providers (such as AT&T, Verizon and Comcast) to "treat all data on the internet the same" - meaning that they do not charge more for, or alter the quality of, service depending on the site, the user, the content, etc.
Net neutrality legislation favors reclassifying internet service providers (ISPs) as "common carriers", meaning they would be treated the same as other public utilities, such as gas, water and electric.
The main arguments for net neutrality have to do with rights and freedom relating to the internet, concern of control of data and information, and concern for a lessening of competition and innovation.
The internet has a history of being an open and free source of information. Net neutrality laws are a way of keeping the internet open and free to all users, as well as open to competition and innovation. The ethos of the internet comes from a similar place as the hacker ethos discussed earlier in the semester -- free, open, meritocratic, not hindered by regulation, bureaucracy and red tape. It can be argued that the repeal of net neutrality is a threat to free speech and to democracy.
Without net neutrality laws, cable and telecommunications companies would be able to control access to websites and loading times, making them 'gatekeepers' of the internet in a sense. This could have serious consequences for competition, growth and innovation. Network owners would be able to block or slow access to their competition's sites.
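A toy sketch of what "treating all data the same" means, versus a paid "fast lane" (the sites and queue here are made up for illustration):

```python
from collections import deque

# Queued packets, each tagged with the site that sent them (hypothetical flows).
packets = deque([
    ("netflix", 1), ("startup.tv", 2), ("netflix", 3), ("startup.tv", 4),
])

def neutral(queue):
    """Net neutrality: strictly first-come, first-served."""
    return list(queue)

def paid_priority(queue, fast_lane):
    """No neutrality: packets from paying sites jump the queue."""
    fast = [p for p in queue if p[0] in fast_lane]
    slow = [p for p in queue if p[0] not in fast_lane]
    return fast + slow

print(neutral(packets))
print(paid_priority(packets, fast_lane={"netflix"}))
# The small competitor's traffic always waits behind the paying incumbent's.
```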
Some of the arguments against net neutrality include:
1) Reduction in investment in network infrastructure by telecom companies. Since they wouldn't be allowed to competitively price their services (for example, charging online companies like Netflix more to transfer their data faster than others), they wouldn't make the extra money needed to build more network infrastructure or recoup the costs of what they've already built.
2) There currently exists "enough" competition among ISPs, and therefore there is no reason to regulate ISPs in terms of monopolization. Research done by Nobel Prize-winning economist Gary Becker and his colleagues found that "there is significant and growing competition among broadband access providers and that few significant competitive problems have been observed to date, suggesting that there is no compelling competitive rationale for such regulation".
3) In the article Am I the only techie against net neutrality?, Josh Steimle argues that while well-intentioned, net neutrality laws are just more government regulation, which will ultimately hinder competition and possibly make our information more vulnerable to government interference.
4) An article on Being Libertarian, written by Thomas Eckert, suggests that ISPs should be able to charge more to larger companies which use more of their supply. He argues it is analogous to toll roads charging 18-wheelers more than smaller cars.
After reading about the various positions on net neutrality, I'm honestly just more confused about my stance on it.
On one hand, net neutrality laws are adding more government regulation. However, it is regulation to de-regulate.
I fully believe that the internet should be free, open and treat all data the same, because that makes for a better democracy. However, I also find compelling the argument that ISPs should be able to charge more to online companies which use their services more than others. Economically speaking, ISPs are private businesses and they exist to make revenue. Like any other business, they should be able to make competitive decisions to optimize their utility. That is a foundational aspect of capitalism.
Marc Andreessen made the point that large telecommunications companies spend around $20 billion a year on capital to be able to provide their services. They need to be able to get a "return on their investment" for their business to be profitable and continue to run. Pure net neutrality rules would make it difficult for these companies to recoup the costs they incur in building and maintaining infrastructure for their services. Without adequate financial gains, the companies have no incentive to improve their services or compete.
So in a way, net neutrality could reduce competition.
However, I think that the freedom of the internet is a foundational aspect of modern democratic society. ISPs shouldn't be allowed to make their competitors' sites inaccessible or slower to load - that would also severely impinge on competition.
Ultimately, I err on the side of favoring net neutrality, because I fear the potential consequences of limiting free speech and innovation more than the possible consequences to competition among large ISPs.
Reading 07: The Internet of Things
The Internet of Things refers to physical devices or objects which use software to connect to the internet and exchange data. There are various reasons for the development and growth of the IoT. One simple reason is that it makes way for cool, new ways to interact with the things we have - for example, smart home appliances, driverless cars, and wearable technology. There have also been more significant uses beyond ease or novelty - for example, in medical devices, manufacturing and infrastructure management for bridges, railways and wind farms.
This development of the Internet of Things is economically significant because there is great potential to increase efficiency - both efficiency of capital (think: more efficient, "smart" wind turbines which are part of the IoT) and efficiency of labor (think: wind turbine employees are immediately notified when a turbine is broken and can immediately fix it, instead of physically going out to each to check). Increased efficiency means increased productivity which means an increase in GDP. Additionally, increases in productivity are the only way to have sustained, long-term growth. Essentially, the possibilities brought about by the IoT are a big deal economically.
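The labor-efficiency point can be made concrete with a small sketch: instead of driving out to inspect every turbine, a monitoring script flags the broken ones from their reported status (the turbine IDs and readings below are hypothetical):

```python
# Hypothetical status reports that "smart" turbines might push to a central server.
turbine_reports = [
    {"id": "T-101", "rpm": 14.2, "status": "ok"},
    {"id": "T-102", "rpm": 0.0,  "status": "fault"},
    {"id": "T-103", "rpm": 13.8, "status": "ok"},
]

def broken_turbines(reports):
    """Return the IDs that need a technician, so only those get a site visit."""
    return [r["id"] for r in reports if r["status"] != "ok" or r["rpm"] == 0.0]

for turbine_id in broken_turbines(turbine_reports):
    # In a real system this might page an on-call technician.
    print(f"ALERT: {turbine_id} needs maintenance")
```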
The main argument against the IoT is the potential for security threats and vulnerabilities. If an object is connected to the internet, there exists the possibility of it being exploited. One article, for example, recounted the recall of 465,000 pacemakers after one security analyst was able to reverse engineer a pacemaker to deliver 830-volt shocks. Another article discussed the potential vulnerabilities of smart cars. Engineers involved with the development of smart cars have tested the vehicles' security to make sure hackers aren't able to access the cars' functions, like climate control, windshield wipers and even the transmission.
Another article pointed out that a secure system is important even for the simplest of devices on the IoT. The example is given of a tablet in hotel rooms that would serve as an alarm, a remote control, and a way to connect to hotel services such as the front desk and room service. At first thought, encryption and authentication for these services don't seem important, since the device would only perform rather trivial actions. However, the author gives an example of how an unsecured tablet of that nature could wreak havoc on a hotel company: were a hacker to access the devices with malicious intent, they could, say, order a bottle of champagne for every room in the hotel, and the guests could then sue the hotel over the charges.
And while that example seems rather trivial-- who would hack a system just to do something like that anyways? -- it opens up the idea of what else could be done with the millions (billions?) of devices connected to the Internet of Things if anyone is able to access and exchange data with them.
As long as there are people who do bad things on this planet (and there always will be), there is always the potential for hacking that is detrimental to society. I think that programmers have to be liable for the code they create. The possibility of breaches and hacks is inherent to the field, so at the least, no programmer can claim to be 'unaware' of the threats that exist to devices connected to the internet.
If breaches or hacks happen, the first step is to ask: was the device protected as well as possible at the time of the security breach? Simple steps like encryption, authentication, secured systems, etc. should be taken when creating a device that is part of the IoT.
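As a sketch of what one of those simple steps might look like for the hotel tablet, here's a minimal authentication scheme using Python's standard library: each tablet signs its orders with a secret key, so the server can reject forged requests (the key and order format are hypothetical):

```python
import hashlib
import hmac

# Secret shared between one tablet and the hotel's server (hypothetical).
ROOM_KEY = b"room-412-secret-key"

def sign(order: str, key: bytes) -> str:
    """Tablet side: attach an HMAC tag to the order."""
    return hmac.new(key, order.encode(), hashlib.sha256).hexdigest()

def verify(order: str, tag: str, key: bytes) -> bool:
    """Server side: recompute the tag and reject anything that doesn't match."""
    expected = hmac.new(key, order.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

order = "room=412&item=champagne&qty=1"
tag = sign(order, ROOM_KEY)

print(verify(order, tag, ROOM_KEY))  # True: legitimate order
print(verify(order.replace("qty=1", "qty=500"), tag, ROOM_KEY))  # False: forged
```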
One thing is sure: there needs to be some regulation, guidance, and legal framework set in place. In order for issues of IoT security to be solved, there needs to be a code or precedent that programmers, developers and manufacturers can look to when creating devices connected to the rapidly growing IoT.
Project 2: Hidden Figures Podcast - Personal Reflection
The challenges that women and minorities face start in childhood; the opportunities and expectations for children of marginalized groups differ. For example, Katherine Johnson's family had to move several times to be able to send their kids to decent K-12 schools. Another challenge that starts young for these groups is the lack of role models similar to them.
The challenges they face continue with the stereotyping and societal norms that they deal with day-to-day throughout their lives. The discrimination can be very visible (for example, Mary Jackson having to fight to be able to get her engineering degree) or less visible (many people at NASA and in society doubting their ability to be good computers).
Part of the issue with the challenges women and minorities in STEM face today is that they are often less visible compared to those of the past (examples: no or limited paid maternity leave, micro-aggressions, dealing with stereotypes and external/internal doubt).
Though women and minorities in STEM face fewer challenges today compared to the time period Hidden Figures was set in, there are still significant changes that need to happen, mostly regarding the workplace environment. In 2017, researchers at Stanford conducted a study surveying over 200 women in Silicon Valley who had 10 or more years of experience working in tech. Many women reported witnessing sexist behavior (90%), experiencing demeaning comments from male colleagues (87%), and various other negative experiences in the workplace.
I don't think that these issues are something society can't fix. Tech companies should be aware of the overt and covert discrimination happening within their field, and they should act to create better environments for all workers. They should respond to harassment complaints quickly and with due diligence. The more we accept these issues as an "inevitable reality", the longer we fail these groups of people.
The stories we tell are important because they shape our perceptions of reality and our history, and they influence how we act. When important details of our history are left out, like the story told in Hidden Figures, we end up having a false conception of reality - e.g., that NASA's accomplishments were achieved purely by white men (which is false).
Growing up, my view of the STEM world was overwhelmingly male. It took me until college to realize the effect this had on me - I do think that I doubt my own abilities in STEM disciplines, partly because I didn't see many bold women in STEM when I was younger. I was never told the stories of the women who have revolutionized STEM. I had few role models to prove to me that women do belong in STEM as much as any man.
Reading 05: Engineering Disasters and Whistleblowing - The case of the Challenger disaster
"There is a tendency in institutions of all kinds to avoid focusing on difficult problems until they explode into crisis and disaster"
Alex Pasternack, How Challenger Exploded, and Other Mistakes Were Made
While there seem to have been several factors that led to the Challenger disaster, poor communication within institutions stands out as the root cause. The explosion was caused by the failure of a rubber O-ring, produced by a company outside NASA. The engineers who created the O-rings knew from tests that they didn't perform well at low temperatures, yet somehow the gravity of that fact wasn't made clear, or at least it wasn't heeded, by the team at NASA. Some argue that the data and charts passed from the manufacturer to NASA insufficiently described the risk of O-ring failure at low temperatures. Though the information is there, the charts don't make it easy to see, and they certainly don't shout any warnings about serious and obvious failure when the O-rings are used at inappropriate temperatures. Hence, there was a failure in communication between the engineers of two different parties (those who created the O-rings and those who made plans to use them).
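To see why presentation matters so much, consider this sketch (with made-up numbers, not the real flight record): simply sorting incident counts by launch temperature makes the pattern jump out in a way the original charts reportedly did not.

```python
# Hypothetical (temperature in F, O-ring incidents) pairs -- illustrative only.
flights = [(70, 1), (57, 2), (81, 0), (53, 3), (75, 0), (63, 1), (79, 0)]

# Sorting by temperature is the whole "analysis" -- and the trend is obvious:
# the colder the launch, the more incidents.
for temp, incidents in sorted(flights):
    print(f"{temp}F  {'X' * incidents or '-'}")
```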
There was also poor communication between the NASA project managers and Morton-Thiokol. On the morning of the launch, the team at Morton-Thiokol warned of risk due to the low temperatures, but the project managers brushed it off instead of heeding the warnings.
What stands out to me is that the people involved in this grave error should have been more actively concerned about risk and more vocal advocates for safety. Seven lives hinged on the accuracy and safety standards of the engineers involved. The priority should have been safety - not a political timetable, not added costs, not concerns for NASA's reputation, but the lives of seven human beings. If someone knew of a problem that they thought could cause the whole spacecraft to explode, that's not just cause to refuse to sign a document; that's cause to fight adamantly, loudly, and clearly that the mission needed to be stopped.
Was Roger Boisjoly ethical in sharing information with the public?
I read through the arguments against whistle-blowing and to be honest, I think they're all pretty selfish. They generally concern protecting the company, or protecting the individual who could face consequences for calling out their company. The latter I empathize with more, but the former I can't say I support.
I understand that there can be harm to the person doing the whistleblowing. Some situations are more dire than others, and therefore place more weight on people to call out bad practice when they see it - the Challenger disaster being a prime example. I think that Roger Boisjoly was ethical in sharing the information with the public. There's no reason it should have been hidden from the public eye. Certain people made mistakes, and when that happens, you have to take responsibility for the consequences. I think it was unjust that his company retaliated against him for sharing the truth. It wasn't good for the company, obviously, but that's a consequence of his supervisor's lack of concern for known risk - his supervisor did approve the launch in the end. Not sharing the information may have made Boisjoly's life easier in the long run and would have been better for Morton-Thiokol. However, I think it was the moral responsibility of the company to own up to its errors.
I understand why the company retaliated against Boisjoly - his whistleblowing was undoubtedly bad for the company. I'm sure it took a toll on its image, its customers' confidence in its products, and ultimately its sales, and therefore the lives of all the employees. No one would be happy about that, I get it. Yet I believe that the moral responsibility to own up to the consequences of one's actions takes priority over people's personal lives.
What good is whistleblowing if it destroys your career or your life? Well, it really depends on the scenario.
In extreme cases, like that of the Challenger, effective whistleblowing literally would have saved lives. In other cases, whistleblowing might not save anyone from direct harm. One should weigh the benefits and consequences. One should explore other, more collaborative paths for resolution if possible. And one should act before there's room for any real disaster.
Ultimately, engineers need to have an orientation towards safety. Their technical knowledge and roles within society give them a responsibility for the general welfare of countless people. Every day, everywhere, people rely on technology, transportation systems, security systems, etc. that are created by engineers. It may take whistleblowing to save some lives.
Alex Pasternack, How Challenger Exploded, and Other Mistakes Were Made
While there seems to be a factors which led to the Challenger disaster, poor communication within institutions stands out as the root cause. The explosion was caused by the failure of a rubber O-ring, produced by a company apart from NASA. The engineers involved in creating the o-rings knew from tests that they didn't perform well at low temperatures, yet somehow, the gravity of that fact wasn't clear or at least it wasn't heeded by the team at NASA. Some argue that the data about O-ring failures at low-temperatures insufficiently described the risk in the data and charts passed from the manufacturing company to NASA. Though the information is there, the charts don't make it easy to see, and they certainly don't shout any warnings about serious and obvious failure when used at inappropriate temperatures. Hence, there was a failure in communication between engineers of two different parts (those who created the o-rings and those who made plans to use them).
There was also poor communication between the NASA project managers and Morton-Thiokol. On the morning of the launch, the team at Morton-Thiokol warned of risk due to the low temperatures, but the project managers brushed it off instead of heeding the warnings.
What stands out to me is that the people involved in this grave error should have been more actively concerned about risk and advocating for safety. There were seven lives hinging on the accuracy and safety standards of the engineers involved. The priority should have been safety- not a political timetable, not adding costs, concerns for NASA's reputation, but safety for the lives of seven human beings. If someone knew of a problem that they thought could cause the whole spacecraft to explode, that's not just cause to refuse to sign a document, that's cause to fight adamantly, loudly, and clearly that the mission needed to be stopped.
Was Roger Boisjoly ethical in sharing information with the public?
I read through the arguments against whistle-blowing and to be honest, I think they're all pretty selfish. They're generally concerning protection of a company or protection of the individual that could face consequences due to calling out their company. The latter I empathize with more, but the first I can't say I support.
I understand that there can be harm to the person doing the whistleblowing. Some situations are more dire than others, and therefore place more weight on people to call out bad practice when they see it - the Challenger disaster being a prime example. I think that Roger Boisjoly was ethical in sharing the information with the public. There's no reason it should have been hidden from the public eye. Certain people made mistakes, and when that happens, you have to take responsibility for the consequences. I think it was unjust that his company retaliated against him for sharing the truth. It wasn't good for the company, obviously, but that's a consequence of his supervisor's lack of concern for a known risk - his supervisor did approve the launch in the end. Not sharing the information may have made Boisjoly's life easier in the long run and would have been better for Morton-Thiokol. However, I think it was the moral responsibility of the company to own up to its errors.
I understand why the company retaliated against Boisjoly - his whistleblowing was undoubtedly bad for the company. I'm sure it took a toll on its image, on customers' confidence in its products, and ultimately on its sales and therefore the livelihoods of all its employees. No one would be happy about that, I get it. Yet I believe that the moral responsibility to own up to the consequences of one's actions takes priority over people's personal lives.
What good is whistleblowing if it destroys your career or your life? Well, it really depends on the scenario.
In extreme cases, like that of the Challenger, effective whistleblowing literally would have saved lives. In other cases, whistleblowing might not save anyone from direct harm. One should weigh the benefits and consequences. One should explore other, more collaborative paths for resolution if possible. And one should act before there's room for any real disaster.
Ultimately, engineers need to have an orientation towards safety. Their technical knowledge and roles within society can make them responsible for the general welfare of countless people. Every day, everywhere, people rely on technology, transportation systems, security systems, and more, all created by engineers. It may take whistleblowing to save some lives.
Reading 04: Diversity
**As a preface, I'm going to mostly discuss this issue of lack of women in tech as it is one that pertains to myself and something I feel strongly about. I acknowledge this is only one of many issues with diversity in tech.**
I have some strong feelings about this issue. I was part of the statistics of women dropping out of STEM - my freshman year I dropped out of Engineering and switched to Economics (which actually includes a considerable amount of math, but still). I had a lot of doubts about whether I could make it through four years of engineering. Two years later, I've reflected on my decision, and I think that engineering wasn't the right choice for me for a variety of reasons. Apart from that, though, I definitely grappled with feelings of inability, insecurity and a lack of self-confidence in my abilities.
I'm sure those are feelings my male counterparts experienced as well, as we dealt with the weeder classes of freshman-year engineering and adjusted to Notre Dame workloads and expectations. However, to this day I feel that the stereotype of men in STEM was something that had a negative effect on me. Sometimes when facing a great challenge, the doubt of others or cultural norms can be enough to make you think 'maybe I just really can't be good at this'. And that is something I do think is a real factor. When tech culture continuously puts out a very specific image of who programmers, developers and 'geniuses' are--an image that is overwhelmingly male, among other traits--it can make it difficult for people who don't fit that image to imagine themselves in that role. Role models are important.
So I think that one issue that persists regarding diversity is stereotyping and the homogeneous culture of the field.
I think that one powerful way to think about the issue of diversity is to imagine alternate realities. Reality: upwards of 84% of undergraduates in America who major in Computer Science are men. Alternate reality: what if women were the majority in computer science fields, like at Harvey Mudd? The fact that more than half of their CompSci undergraduates are women was shocking to me. And the fact that that is shocking is the really messed-up part.
The article about Harvey Mudd made another good point - when women and other minorities miss out on Computer Science, they're also missing out on earning potential. The author, Rosanna Xia, quotes Ran Libeskind-Hadas, one of the computer science professors at Harvey Mudd: "Companies are offering six-figure salaries with good benefits to 22-year-olds. For young women not to be able to be part of that economy is just a failing on the part of society."
So why exactly are there so many more women studying Computer Science at Harvey Mudd? The school took intentional steps to change the culture. The college's president, a computer scientist herself, stated, "Building confidence and a sense of belonging and a sense of community among these women makes such a huge difference. Once you change the myths and the cultural beliefs about computer science, that has a lot of momentum." Within four years of their experimental program, the percentage of women in computer science more than tripled. The Vox article regarding the lack of women in tech cited a study surveying women in tech: 90% reported demeaning comments from male colleagues. That is not building a good environment.
So no, I don't think that the lack of diversity in tech is just a "possibly unfortunate" reality. Moreover, I think that kind of thinking is a lazy excuse not to make efforts to change the status quo. I might have switched out of engineering, but that was because it was the right thing for me. I look around Notre Dame, though, and I look around this class, and I see plenty of competent, powerful, brilliant women. I hope that the current cultural and societal shifts move us to recognize the ability we have, instead of letting our potential be squandered by cultural constructs.
Reading 03: Work-life Balance
Can parents have successful and fulfilling careers while also raising a family and meeting other non-work related goals?
It's funny that this is one of the topics this week, because this past weekend I visited two close friends who are Notre Dame alumni now living and working in Chicago, and we had some discussions about this. What I took away from our conversations is that success, fulfillment, meaning and happiness are different for each person. A lot of people may share similar ideas, but every person has their own definitions of those ideas for themselves. For one person, being successful and fulfilled may mean raising kids and giving them all the opportunities they hope to give them. For another person, it might be becoming the head of a company or a respected professional in their community; for another, it might be reaching life goals that don't relate to a job or a family at all.
I think that "having it all" can be a misleading phrase. It seems to imply that you can have a perfect work-life balance, and also that there's a particular way to achieve that, or that it's a clear-cut benchmark. I think the reality of balancing work, family life and non-work related goals/activities is that it's messy, and you may have to make sacrifices when choosing how to allocate your time.
Finding the balance between work and family life may be inherently more difficult for women. For example, Julia Cheiffetz related the difficulty of becoming a mother while holding a big role at Amazon in a post titled, I Had a Baby and Cancer When I Worked at Amazon. This Is My Story. Granted, Julia was dealing with more than just giving birth and taking care of a newborn, but this still stands as a good example of the trouble women face in maintaining their professional standing through motherhood. She took only a five-month leave covering both maternity leave and cancer therapy, yet when she returned, she found that she wasn't exactly getting her position back, and that Amazon had placed her on a 'plan', which usually signals that someone's job is at risk.
I get that business doesn't stop when a woman employee goes on maternity leave. But 'finding a balance' for women shouldn't mean having to decide between their work and having children. Similarly for men - I believe in paid paternity leave. Men should be just as integral in taking care of and bonding with their newborn children.
In Maybe We All Need a Little Less Balance, Brad Stulberg makes an interesting proposal. He postulates that 'balance' isn't the key to happiness, success or fulfillment, but that we are best able to achieve when we dedicate ourselves to one thing, whether that be family, work, or a personal project. I think this has some legitimacy. When you think of people who are particularly successful at any one thing, it is usually because they have spent a lot of time on that thing. Professional athletes, CEOs, great researchers, anyone at the top of their field - they get there by putting in hours upon hours of work, which naturally means they have less time for other things. Stulberg makes a good point. Yet I think this is just another way that some people have found meaning in their lives. Using myself as an example: I love to climb. If I could, I would probably spend a year, maybe two, dedicated to climbing. But five, ten, twenty, forty years dedicated to climbing? I would probably become pretty damn good at it. Yet I don't think I would exactly be fulfilled, by my own definition, in the end.
When a person dedicates large amounts of time to one particular life objective, there's also the issue of burnout. Alina Dizik, in The Strange Psychology of Stress and Burnout, discusses the benefits of some levels of stress but also the 'burnout' effect of chronic, long-term stress. Some other articles gave examples of how burnout plays out in real life, like the NYT exposé of Amazon, in which some workers related their resignations due to being overworked.
I've learned from being at Notre Dame that different people handle stress differently, and that some can handle more than others, or handle it better than others. I think it's important for people to know their own limits and where they draw the line when it comes to how they let their work influence their lives.
I know that for myself, finding a healthy balance between work and my other priorities is very important. I don't like to sacrifice my well-being for the sake of academic or career-related success. I will always put myself first.
Ultimately, people are different. They want different things, and they handle emotions and situations differently. It's up to each person to determine what they want to get out of life.
Project 01: Personal Reflection
Some highlights from the code of ethics we made are the tenets of integrity, creating community and acknowledging one's role in it, considering the impacts of one's actions, and striving to do good with the talents one possesses. These all seem like obviously good things. It was especially important to me to include the clauses about community, because community may be more of a "soft issue" than a technical one or a basic moral principle, yet I think it is just as important. Diversity is being stressed a lot in the tech sphere, but it doesn't happen on its own. People need to make conscious decisions and take actions to create inclusive communities.
The document isn't perfect, obviously. A truly comprehensive code of ethics could be very lengthy - I wouldn't be surprised if Du Lac runs around one hundred pages. The difficulty with creating codes of ethics is that a comprehensive one would be incredibly lengthy and painstaking to create, with detailed boundaries that are hard to define. The difficulty lies in drawing lines in grey areas. If you were to make very specific ethical codes, judgement calls might be made that not everyone agrees with.
I think that making a good, more comprehensive code of ethics would require a conference-type event where many people from many areas of the computer science and engineering realm get together and try to come to consensus in some key areas where ethical guidelines are important (areas including many of the topics covered in this class, such as privacy and security, AI, and net neutrality).
I think that codes of ethics are useful if they're well written and known as the industry standard. They could be helpful when there are disputes within the field as to whether some person or company has violated an important ethical standard. It is also important that they exist because they are a manifestation of the fact that people (especially professionals in the computer science field, who potentially have far-reaching influences and effects) can't just do whatever they please. It's imperative that they're aware of the impacts of their actions.
Project 01: Code of Ethics
Group members: Grace Bushong, Mabelle Wongsanguan, Molly Smith
Code of Ethics
for
University of Notre Dame Computer Science and Engineering Students
2018
When studying Computer Science and Engineering at Notre Dame, students are expected to live by the following ethical standards.
Integrity
- Be honest
  - About the work you do for classes.
    - Don’t cheat on exams.
    - Don’t copy or cheat on assignments.
    - Don’t use others’ work without citing it, or claim ownership of another person’s work.
  - About yourself to potential employers.
    - Be honest about your knowledge and qualifications.
    - Be honest about your job search status (if you have other offers, etc.).
  - Use open source alternatives rather than pirating software. Support the creators of programs you like by paying for their software!
- Be a good fellow student.
  - Try to help other students, especially if you notice them struggling.
  - Be a good team member in group projects. Do your fair share of the work.
- Be humble.
  - Acknowledge other people’s ideas as their own and give them credit.
  - Respect people who disagree with you and don’t judge people based on superficial qualities. Even if it’s something dumb like which text editor you use!
Responsibilities
- Use your abilities for good. Tech is a high-paying industry, and that’s great, but use your skills to create products that will also improve people’s lives.
- Consider the direct and possible indirect impacts of the code you write; be critical and question the effects of the technology you create or help create.
- Don’t write malicious code, obviously.
- Be intentional in creating a welcoming, supportive, inclusive environment. Understand your role in influencing the culture as a member of the community.
- Respect differences in preference. Don’t belittle people who have different preferences than yourself (for example: Vi(m) versus Emacs). Respect your peers.
- Recognize the opportunities you’ve had, and recognize that not all people (or even many) have had the same opportunities.
  - It’s hard to always be motivated, but it’s important to fully take advantage of your opportunities!
- Give back.
The above principles are a moral framework intended to create an environment which is beneficial and just to all students in the Computer Science and Engineering department, and to the community at large. When we are intentional about our actions, we create a better place for everyone to learn, grow, innovate and create.
Reading 02: Job Mobility, Hiring and my future!
Where do I see my career headed? Do I plan on staying with one company or do I envision moving from job to job?
These are timely questions considering I’m in the midst of finding a job for this summer and consequently trying to sort out some questions about my longer-term future.
At this point, I find it more likely that I’ll change jobs every couple of years. I remain fairly unsure about what I want to do for work. It seems like every few months I imagine myself in a different career. However, there are some things I’m sure about that lead me to believe I’ll be changing jobs fairly often during the first ten or so years post-undergraduate:
- I love to learn new things.
- I love learning new things because it changes how I think and my perspective of the world.
- I like to be challenged.
- I like to meet new people.
- I like to experience living in different places.
I don’t rule out the possibility that I could really enjoy my first or second job and that I might want to stay at whatever place that is. I would be surprised if that happened, but I guess it’s possible.
Is there such a thing as company loyalty? Should you be loyal to your company, or should your company be loyal to you?
As someone seeking a job, I pay a lot of attention to the apparent work environment of companies I’m considering. After all, I would end up spending a lot of time in these places. I’ve looked at lists of “best places to work” or “best companies to work at”. One way to think about the relationship between companies like these and their employees is that, in a way, the companies treat their employees as clients. They care that they create a good employee experience and work environment.
That approach is what I see as the peak of company loyalty to its employees; they follow through on their promises, and they strive to create a healthy, safe, inclusive, and productive work environment.
I’ve had an experience working for a company that didn’t follow through on its contract, and I ended up feeling a sense of disloyalty from the company towards me. Two summers ago, I interned at a summer camp. The contract I signed before starting stated the number of hours and weeks I was committing to work. Later that summer, they realized they had over-hired, so they cut the hours of several employees, including myself. I needed those hours, though - I was paying rent in Chicago and had planned on my earnings to cover rent, food, and so on. But the contract I signed clearly stated that I was to work a minimum of x hours that summer. I ended up having to go over my supervisor’s head to talk with headquarters. Their human resources department conceded that they weren’t adhering to the contract and ended up reimbursing me for the lost hours.
Because of that experience, companies being loyal to their employees is something really important to me now. Companies should follow through on the promises they make to their employees. Likewise, employees should follow through on their end of the contract.
Non-compete clauses are another issue, however. I don’t think that non-compete clauses should be used in the quantity they currently are. According to the article by Daniel Wiessner, in 2016 about one in five employees was bound by some sort of non-compete clause. The article, titled “White House urges ban on non-compete agreements for many workers”, relates the struggles of many blue-collar workers whose job contracts contained non-compete clauses that hindered them from getting better jobs, or even a job at all, after leaving their employers. Another article, by Conor Dougherty, “How Noncompete Clauses Keep Workers Locked In”, discusses the economic downsides of non-competes. Just as the name suggests, they limit competition - making it harder for a capitalist economy to function correctly. The article says that one economics professor at Princeton labeled non-compete clauses “outright collusion” and “part of a rigged labor market”.
While I believe that employees have an obligation to be loyal to their companies in some ways, I don’t think that so many employees should be bound by non-compete clauses. Those clauses are mostly needed to protect ‘trade secrets’, which should really only involve a small number of employees, not one-fifth of the workers in America.
I think that the ethics of ‘job-hopping’ is more situational. If someone hasn’t had a good experience at a certain company, or they decide they want to go a different route, or they think they’ve grown out of the job - in situations like those, I don’t think you can fault the employee for wanting to change. In my opinion, as long as they stay for the time they signed for and give their employers reasonable notice before leaving, it is ethical to change jobs as often as desired.
In fact, some people see job-hopping as advantageous, and not just for the employees doing it, but for the companies that hire them. The article by Vivian Giang, “You should plan on switching jobs every three years for the rest of your life”, posits that changing jobs often leads to faster learning and employees who are more engaged in their work. I don’t necessarily agree with the latter part about being “more engaged”, because I think that more time in a job doesn’t necessarily mean being less engaged. However, I do think that changing jobs every few years can be beneficial, stimulating learning and bringing in people with fresh perspectives.
That’s part of my hope for myself if I do end up changing jobs a lot during my twenties -- that at the very least, I’m learning a lot.
Reading 01: Hackers, Ethos | 1.24.18
Does the computing industry have an obligation to engage in social and political issues?
As with any other industry, we can't force the computing industry to follow social or ethical codes except through laws, and it would be overstepping governmental jurisdiction to oblige it to address those topics. So it really comes down to normative questions - should the industry address these issues? To what extent should it be involved? How much is a morally responsible amount? Is it possible the industry could get too involved?
In my opinion, the government shouldn't force companies or industries to "fix" social or economic issues. It's not practical and it's not within the scope of governmental jurisdiction. The public can certainly influence companies and industries, especially if they are the users of the products and services. However, I think that the industry itself should recognize the social, economic and political influences it has, and it should act accordingly.
It would be wise for the industry to develop codes of ethics -- its own set of "laws" determining the ethical and moral roles it should play in society. Like other professional societies of doctors, researchers, or scientists, computer science would benefit from regulation and codes of conduct. The fact that the field of computer science is relatively young contributes to the lack of structure such as professional organizations, regulations or ethical codes. Those structures, however, are critical in creating an industry that lives up to the "hacker manifesto" and in, at the very least, not creating a negative impact on society.
One interesting case relating the tech world to social issues is wealth inequality. Tech giants in Silicon Valley and elsewhere are making unfathomable amounts of money off of apps, software and other products and services. One article mentioned that Candy Crush Saga is valued at over $7 billion - more than the combined value of eight African nations [which remain unnamed in the article]. One might argue that that's just how capitalism works, and also why progressive taxes exist - as an effort to redistribute wealth and create social safety nets for those who need them. Fair point.
But what about when new tech directly contributes to inequality? For example, if automated trucks take over the shipping industry, what responsibility do the creators of that technology have towards the thousands of newly unemployed truck drivers?
One solution that Silicon Valley decided to test is UBI, or Universal Basic Income. The idea is that everyone receives a basic income (of, say, $1,000 or $2,000 a month). It is currently being tested in Oakland, CA, an area that has been heavily affected by gentrification driven by the tech sector. It remains to be seen whether this approach has social and economic benefits. Yet it is significant because it is an example of an institution assuming responsibility for the negative externalities of its actions.
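To put rough numbers on the idea, here's a back-of-the-envelope sketch. The monthly amount echoes the hypothetical figures above; the pilot size is an arbitrary number chosen for illustration, not the actual Oakland enrollment:

```python
# Back-of-the-envelope UBI cost, using hypothetical numbers.
monthly_income = 1_000   # dollars per person per month (assumed)
recipients = 100         # arbitrary pilot size, for illustration only

annual_cost = monthly_income * 12 * recipients
print(f"${annual_cost:,} per year")   # -> $1,200,000 per year
```

Even a tiny pilot costs over a million dollars a year, which is part of why these experiments start small and why the funding question matters.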
The underlying idea of taking responsibility for one's actions is noble. It has a sound moral basis. However, I don't think that legally making every entity responsible for every negative externality it creates is realistic. It would get far too messy and subjective to calculate. This is why I believe the industry should really work to regulate itself and make ethical codes to which it adheres. How the industry decides to address social issues is something it should decide. If tech wants to "save the world", it follows that it would seek to address social issues when possible.
The ethos of the tech industry is reflected in Mark Zuckerberg's letter to investors in which he expounds "The Hacker Way". He describes this mentality as one that promotes "openness", "meritocracy", and creating code that has "social value".
It is true that there are a plethora of companies and non-profits in tech that are working to address social problems - Y Combinator testing UBI is one example. While this is a good thing at a surface level, it is important to be critical of the ways in which we try to "fix" social problems.
One potential problem in having the computing industry address social issues is its disconnect from the world outside the tech bubble. Ross Baird writes the following in “Silicon Valley’s Unchecked Arrogance”: “Because most of today’s entrepreneurs have their basic needs taken care of, their problem-solving often seems frivolous to the rest of the country.” Essentially, if the industry is going to alleviate social or economic issues, it needs to 1) be aware of and 2) adequately understand the issues at hand. This generally requires experience outside the industry bubble, and bringing in people who are experiencing the issues that need work. On the bright side, the gap between people who work in the computing industry and those who do not is narrowing as diversity is increasingly emphasized. Coding programs in K-12 schools, especially in lower-income, under-funded schools, are one example of first steps toward developing an industry more apt to deal with a wide array of social problems.
So can tech save the world? I have no doubt tech is capable of being a great tool for positive social change. But it's up to the industry to decide where they go from here.
Reading 00: Why study ethics in the context of Computer Science and Engineering | 1.16.18
Ethics in general is important because people can be bad. People cut corners. I remember being in elementary school gym class and it always bothered me that people literally cut corners when doing warm-up laps around the rectangular gym. Humans are capable of doing great and amazing things, but we can also be lazy, ignorant, greedy, and deceitful. This is why we need ethics. They help create a fruitful, equitable and (hopefully) happy society.
So ethics are important to any field, really. But it makes sense that the greater the societal impact of a given field, the more power that group of people wield in the society and therefore the greater social responsibility they have to act ethically.
Software is increasingly becoming an integral part of every major infrastructure on which we rely. Software is used for transportation, in classrooms, in grocery stores, in research and by farmers. Software is everywhere. And software is the future. And if software is the future and people make software, then people are creating the future. What kind of future do we want to create? Net neutrality, for example, was recently repealed - is the future we want one where the internet is no longer a free, open and innovative resource?
Computer scientists and software engineers have a great responsibility to act ethically and morally, because they are making decisions that affect the millions or billions of people who use their software (and possibly those who don't). Facebook has 2.07 billion users, all influenced by the software decisions of its 23,165 employees.* The new phenomenon we are facing is that small groups of people are beginning to have very big influences on humankind. Therefore, it is important that we take seriously the adage "with great power comes great responsibility", because the "great power" in this scenario is the power to influence the future of humankind. That equates to a whole lot of responsibility.
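A quick worked ratio makes that concentration of influence concrete. This uses the figures cited above (reading the user count as 2.07 billion, per the Wikipedia source below):

```python
# Scale of influence: users per Facebook employee, from the figures cited above.
users = 2.07e9      # 2.07 billion users
employees = 23_165  # Facebook employees

print(f"{users / employees:,.0f} users per employee")  # -> ~89,359 users per employee
```

Each employee's decisions ripple out to roughly ninety thousand people - a scale that almost no other profession touches.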
Ethics in software design is also an issue of dependence. Relatively few people can even code in one programming language, much less work as a software developer. Even more frightening, those of us who use software but aren't involved in developing it (like me) often aren't aware of, or don't have a working understanding of, the issues in the field, despite software being pervasive in our lives. Jonathan Harris highlights this idea in his vignette Modern Medicine, making the analogy between computer scientists and farmers: “we should be able to trust those of us who are to build us nourishing spaces and tools, in the same way we trust farmers to grow us good food and architects to build us good buildings.”
A reasonable objection to the thought that software engineers are responsible for the effects of the code they write is that consumers have free will - it is their choice whether or not to use the software. But when users begin to use software, they create a dependency and give up a small portion of their freedom. Andy Ko discussed how one can think of code as a social contract in which coders are granted power by their users. To that end, developers need to accept responsibility for the code they are writing.
One example of the impact of one's code is seen in the experience of Bill Sourour: working at a coding job at a marketing firm, he was asked to create an online quiz that helped people determine whether a certain drug could be beneficial for them. However, the quiz he was asked to code was deceitful, and it turned out that people had committed suicide due to the side effects of that drug. Coders may be the last line of ethical defense in some situations.
In the end, ethics in computer science are important because of the scope of the impact that software has in our world both today and in the future. It would be ignorant to not bring ethics and morality to the forefront of developer conversations today.
*https://en.wikipedia.org/wiki/Facebook
Introduction
Hello! My name is Molly Smith and I'm a junior at the University of Notre Dame majoring in Economics and minoring in Computing and Digital Technologies. I love being a student because learning new things is cool and changes how one thinks and interacts with the world (which is super cool!). But when I'm not busy being a student, I like to rock climb, keep up with current events and politics, read, do yoga, and spend time with friends.
I'm studying economics because I'm interested in the ways that good economic policies can benefit societies, and I really enjoy the intersection of math, modeling and numbers with ideas like consumer behavior, scarcity and optimization of resources. I chose Computing and Digital technologies as my minor because I enjoy the challenge of coding. I love learning new languages--I've taken eight years of Spanish classes and spent nine months in Spanish-speaking countries. Learning programming languages is similar to learning a new spoken language, so that aspect is another draw.
In the class Ethical and Professional Issues, I'm excited to delve into a wide array of ethical topics in technology, particularly learning more about the net neutrality debate, considering the recent repeal of net neutrality. To be honest, I don't have much of an opinion on "the most pressing issue" facing computer scientists, because I'm not that knowledgeable or up-to-date on current events in this field. However, that's another thing I hope to take away from this class - knowledge of the moral and ethical issues facing computer scientists, and a stance of my own on each issue.