Journal Entry Week 8 20221017 - klmartinez/DSF GitHub Wiki

I found our assigned journal club reading particularly thought-provoking this week: Making Kin with the Machines.

I hadn't really thought about the implications of AI on society beyond the more abstract thoughts kindled by watching sci-fi movies such as Ex Machina, A.I. Artificial Intelligence, I Am Mother, The Matrix, etc. The potential impact of AI on humans and our world felt remote and irrelevant, but reading this article made the topic seem more urgent. A common theme of many of the AI-focused movies I have seen is AI machines rising up and destroying or dominating the human species. More than anything, this reflects our own human history of brutal conquest and slavery. Why do many people misinterpret and then fear feminism, believing that our goal is not equality but dominance? Because that is the patriarchal infrastructure in place now. The same fear arises with AI technology and machines - we create narratives of "them" vs "us" while projecting our own villainy onto "them".

And I think this fear is not unfounded. Already many technologies, including AI, have been curated or commandeered for nefarious ends, such as military purposes. This raises the question of ownership - in the context of not only AI but other technologies, software practices, and datasets (as we've discussed during open science lectures & discussions). Who would own this technology? I think the common assumption is that the original creator will forever be the owner of the created. I even see this in my own family dynamics - I grew up being blatantly told that I belong to my mother because she made me. But this ownership leaves the created at the mercy of the creator and their own personal goals and ambitions. Shared ownership is now a more prevalent idea, but then who will be responsible for making sure new technologies aren't used for unethical purposes? Who will enforce these rules? Is it moral to assign ownership at all?

With my research now, I have been solely focused on using the datasets and technology given to me to develop my bioinformatics skillset, rather than on whether the questions I am asking lead to outcomes that positively affect the world around me. I often feel distant and detached from the projects themselves, especially as I've moved into a postdoctoral role where I hop on and contribute to several projects as opposed to leading and focusing on one. But to step up and take full responsibility for my scientific contributions feels daunting. It seems almost impossible to become an expert not only in the methodology I've been trained to carry out, but also in understanding all the potential implications and future impacts of one's own research - I feel that all knowledge, even when first created with the best of intentions, can be twisted and corrupted. Additionally, genetic research in particular has been used to support the worst atrocities and still drives the eugenics movement today in more subtle and sinister ways. It feels like I am fighting against generational trauma, where the long history of genetic research has often only contributed to violence. How do I break the cycle?

For this week's group challenge we did the following:

  1. Forked a repository created by Greg Chism
  2. Cloned this repo into a CyVerse Rocker RStudio Verse (latest) app
  3. Reproduced the R code we created together during our Tuesday lecture
  4. Created additional plots using the tools we learned during our Tuesday lecture
  5. Pushed the changes to GitHub
  6. Saved a new dataset to the output folder in CyVerse
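The fork-clone-push part of the steps above can be sketched in shell. This is only an illustration, not the exact commands we ran: the repository URL and file names are hypothetical, and a local bare repository stands in for the GitHub remote so the commands run anywhere without network access.

```shell
# Stand-in for the GitHub remote (a real fork would live at
# github.com/<user>/<repo>; this bare repo plays that role locally).
git init --bare /tmp/demo-remote.git

# Steps 1-2: clone the (forked) repository into the working environment
git clone /tmp/demo-remote.git /tmp/demo-clone
cd /tmp/demo-clone
git config user.email "demo@example.com"
git config user.name  "Demo User"

# Steps 3-4: reproduce / extend the analysis, then commit the results
echo 'plot(mtcars$mpg, mtcars$wt)' > analysis.R
git add analysis.R
git commit -m "Reproduce lecture plots"

# Step 5: push the changes back to the remote (GitHub, in the real workflow)
git push origin HEAD
```

In the real workflow the clone URL points at the fork on GitHub, and pushing requires authentication (e.g. a personal access token or SSH key) set up inside the CyVerse app.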

The updated R code can be found here.

For FOSS:

Something I was introduced to in FOSS was the idea of using package managers to document the primary software, its dependencies, and their versions. I need to look into this more and incorporate it into my daily habits.
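As a minimal sketch of the idea, pinning dependency versions can be as simple as capturing the current environment to a file. pip is used here only as a concrete, widely available example; for an R project like ours the analogous tool would be renv (e.g. `renv::snapshot()`), and conda environments can be captured with `conda env export`.

```shell
# Record the exact versions of every installed Python package in a
# requirements file (the path is arbitrary, chosen for illustration).
python3 -m pip freeze > /tmp/requirements.txt

# Anyone can then rebuild the same environment later with:
#   python3 -m pip install -r /tmp/requirements.txt
echo "Recorded $(wc -l < /tmp/requirements.txt) pinned dependencies"
```

Committing the resulting lockfile alongside the analysis code is what makes the work reproducible by others.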

Useful links: