Didn’t you see my status?

It’s fairly common knowledge that Facebook uses a specific algorithm to determine which items show up on your news feed. Although we mere mortals outside the company don’t know exactly how it works, we do know it depends on a variety of factors, including word usage and how many people like or comment on a status. Caleb Garling, a writer from The Atlantic, tried a little experiment: he posted a fake status but used words indicating that something big and important was happening.


Once Facebook picked up on phrases like “big news,” “so excited to begin” and “all of your support,” it began putting Caleb’s status at the top of his friends’ news feeds.  As his friends continued to like and comment, Facebook showed the status to more and more people. 

It’s an interesting setup and makes a lot of logical sense. You’d much rather hear about someone’s big life change than about what was for dinner, right? But speaking from personal experience, Facebook’s algorithm has often disappointed me. I’ve noticed that I see posts from the exact same people over and over again…usually those annoying people from high school ranting about how life sucks. Sometimes there’s a good variety of posts on my news feed, but other times it feels like I only have 10 Facebook friends. It can get frustrating.

I’d be curious to see what would happen if the site created some sort of personal algorithm for each user. Rather than tracking phrases that interest people in general, the site could track phrases from the statuses you tend to like or comment on. That would personalize the feed and surface updates that actually interest you. I imagine our social networking communities would become smaller and more tailored to what we care about. This would be a positive change for the user, but it would limit the global character of a social networking site. Every positive comes with a negative; we just have to decide whether the change is worth it.
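Just to make the idea concrete, here’s a rough sketch of what a “personal” scoring function might look like. To be clear, this is a toy example of my own, not anything resembling Facebook’s actual code; the phrases, weights and scoring formula are all made up.

```python
# A toy, purely hypothetical "personal" feed score.
# The phrase weights come from statuses this user has previously liked or
# commented on; the like and comment counts come from the post being scored.

def personal_score(status_text, likes, comments, phrase_weights):
    """Score a status for one user: engaged-with phrases plus raw engagement."""
    text = status_text.lower()
    phrase_boost = sum(weight for phrase, weight in phrase_weights.items()
                       if phrase in text)
    # Treat comments as a stronger signal than likes (an arbitrary choice).
    engagement = likes + 2 * comments
    return phrase_boost + engagement

# A user whose history suggests they engage with "big news" posts
my_weights = {"big news": 5.0, "so excited": 3.0, "engaged": 4.0}
print(personal_score("Big news, everyone, so excited to share!",
                     likes=12, comments=4, phrase_weights=my_weights))
# 5.0 + 3.0 + 12 + 2*4 = 28.0
```

The point is simply that the weights would come from each user’s own history of likes and comments rather than from what interests people in general.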

#droptheplus

It’s happening…and boy am I glad! I saw an article on BuzzFeed last night about an Australian model who is supporting the campaign against the term “plus-size.” The model, Stefania Ferrario, called the term misleading and damaging to young girls. She herself is labeled plus-size and argued that it is anything but an empowering term to be called. Yes, yes and YES!

You might recall when I wrote about how harmful this particular term is. The label is given to so many normal-size women and, unfortunately, is often used interchangeably with the word “fat.” Plus-size carries a negative connotation, and mislabeling women who are not actually plus-size, even in an attempt to glorify them, does more harm than good. It is so encouraging to see someone from inside the fashion industry take a stand on this. And not only that, but supporters are joining in on social media with #droptheplus. The campaign is a great start, and lighting the fuse on social media has the potential to reach millions of people all over the globe. The fashion industry uses mass communication and social media to its advantage, but now critics of the industry can use the same tools to fight back. This wouldn’t have been possible even five years ago. The change a simple hashtag can bring about is amazing and will undoubtedly put some steam behind the campaign.


Analyzing my tweets

After Sarah’s presentation today in class, I decided to see what all the fuss is about and try out Analyze Words.

Here are my results:

[Screenshot: my Analyze Words results]

Overall I think it was pretty accurate. My emotional style is usually upbeat because I have a tendency to put my best foot forward on social media. I wouldn’t want my tweets to be all negative; that’s the kind of account I wouldn’t care to follow myself. That being said, many of my tweets were categorized as worried and depressed. I know I use Twitter once in a while to rant about something or put in my two cents on current events. But depressed? I never viewed my tweets that way. A little harsh, don’t you think, Analyze Words?

As for my social style, I was glad to see my tweets categorized as plugged-in and personable. Since I’m hoping to use my account as a professional journalist soon, those are two categories I think are important. If my readers look at my tweets, I would hope they’d see my thoughts as in-the-know and relatable. It’s a way to connect with readers on a more personal level. I was glad to see my thinking style come back as analytic and sensory for the same reasons. I would hope my tweets add some sort of insight to a situation, even if I am just ranting. Honestly, I would have expected more in-the-moment tweets, since that’s when I have the most to say. My tweets aren’t usually planned; I tweet when experiences or moments inspire me to say something.

I’ve got to say, this site is pretty cool. I had a couple of questions about the analysis, but I think the most helpful thing about it is that it gets you thinking about what you write. It’s an easy way to get some outside feedback on your account.
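I have no idea what Analyze Words actually does under the hood, but tools like this generally boil down to counting how often your tweets use words from different categories. A bare-bones toy version, with word lists I made up myself, might look something like this:

```python
# A made-up, bare-bones version of word-category tweet analysis.
# The categories and word lists are my own guesses, not Analyze Words'.

UPBEAT = {"glad", "love", "excited", "great", "happy"}
WORRIED = {"worried", "afraid", "stress", "ugh", "sad"}

def emotional_style(tweets):
    """Count upbeat vs. worried words across a list of tweets."""
    counts = {"upbeat": 0, "worried": 0}
    for tweet in tweets:
        for word in tweet.lower().split():
            word = word.strip(".,!?#@")
            if word in UPBEAT:
                counts["upbeat"] += 1
            elif word in WORRIED:
                counts["worried"] += 1
    return counts

print(emotional_style([
    "So glad it's finally spring! Love this weather.",
    "Ugh, so worried about finals this week.",
]))
# {'upbeat': 2, 'worried': 2}
```

The real tool is obviously far more sophisticated than that, but even a crude version like this shows how a handful of ranty tweets could tip a category like “worried.”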

Getting the next generation to look up

On Tuesday our class talked about how new technology has affected our attention spans and essentially rewired our brains.  This could turn into a major problem, but honestly, will it convince us to put down our smartphones? Absolutely not. So where do we go from here? How can we avoid this problem? Let me suggest that perhaps the solution should start at home.

The human brain does most of its developing during childhood, especially before the age of 10. In the first months of a child’s life, each neuron in the brain is attaching itself to about 15,000 other neurons. This is incredible stuff, but it also means the way a child is raised from infancy is vitally important to his or her growth into adulthood. Ultimately, parents, guardians and caregivers play a large role in how a child develops.

Studies have shown that children exposed to fast-paced, colorful, highly stimulating media tend to be at higher risk of attention problems. This goes hand in hand with new technology: iPhone and iPad games, video games, stimulating television shows and the general expectation of having information at one’s fingertips. If a child is allowed to play video games for several hours a day, or if a 10-year-old is given unlimited access to a smartphone and Instagram, then yes, I would argue that this rewires the brain. But if a child’s technology use is monitored and the guardian encourages other kinds of entertainment, a lot of that damage can be avoided. Our generation may be addicted to our technology, but we still have the ability to put our phones down once in a while because we remember a time when they weren’t always in our hands. By limiting a child’s technology use, you can give him or her experiences outside of a screen.

Unanswered questions of science

Earlier this week Clay shared a video on his blog of Dr. Daniel Siegel explaining the harmful effects of looking at screens before bedtime. The photons from the screen’s light discourage your brain from releasing melatonin and thus disrupt your ability to fall asleep. Scientifically, this all makes sense, and I think we can comfortably agree that it’s true. But I would argue that Dr. Siegel left out a vital part of the discussion: how do the size of the screen and the number of photons it gives off play a role in disrupting sleep?

I’m really asking this question based on personal experience.  When I watch a show on my laptop before bed, I have noticed that it’s harder to unwind and fall asleep after I’ve finished watching.  Usually I attribute this to a gripping plot that has stimulated my brain in some way, and while that’s bound to be partially true, I now know the photons are another cause.


But here’s where I start to have questions. I scroll through Facebook, Twitter and BuzzFeed every night on my smartphone before I go to sleep. It’s a way for my body to relax and get sleepy. Several times I’ve caught myself dozing off with my phone still in my hand and my BuzzFeed app still open. I’m willing to admit this has turned into a bad habit, to the extent that I have trouble falling asleep if I don’t first scroll through my phone. And yes, this ritual technically takes away some of the time I could have spent sleeping, as Dr. Siegel also mentions.

But why does my laptop keep me awake while my smartphone helps put me to sleep? Is it because the smaller amount of light coming from a phone screen, as opposed to a laptop screen, just isn’t enough to bother me? Or is it a fluke of my natural sleep patterns working together with the bedtime habit I’ve created? I would love to know the answers, and I think these are important aspects that have been left out of the discussion. If I had to take a wild guess, though, I’d say it’s all of the above.
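For what it’s worth, a quick back-of-the-envelope calculation shows just how different the two screens are. Assuming a 13-inch laptop and a 5-inch phone, both with 16:9 screens set to a similar brightness per square inch (a big assumption, and one that ignores how far from my face I hold each device):

```python
import math

# Rough comparison of screen areas, assuming a 16:9 aspect ratio.
# Brightness settings and viewing distance are ignored, and both matter a lot.

def screen_area(diagonal_inches, aspect=(16, 9)):
    """Area in square inches of a screen with the given diagonal."""
    w, h = aspect
    scale = diagonal_inches / math.hypot(w, h)  # inches per aspect unit
    return (w * scale) * (h * scale)

laptop = screen_area(13.3)  # about 76 square inches
phone = screen_area(5.0)    # about 11 square inches
print(round(laptop / phone, 1))  # roughly 7 times the glowing area
```

Seven or so times the glowing area at least fits with my hunch that the phone simply isn’t putting out enough light to bother me, though it hardly settles the question.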

Too good to be true

Not too long ago, I wrote a piece about Cindy Crawford’s untouched photo that went viral a couple of weeks ago. Unfortunately, it all turned out to be a fake.

Apparently someone got hold of the photo and tweaked it to make her stomach look less than muscular and to give her legs the forbidden cellulite. Two lies are at play here: the photo was not untouched, and the body in it isn’t even really Cindy’s.

I was disappointed to hear about this, mostly because of all the good the photo could have done. Only a handful of celebrities have actively stood up against Photoshop. If Cindy Crawford, a world-famous legend of a supermodel, had released an untouched picture and fully embraced her true body in the public eye, think of all the women it would have inspired. It takes courage to show your stretch marks and cellulite to the world, and unfortunately it’s still considered a sin to have either.

Ultimately, what has this fake done? More harm than good, I would argue. Now most people will go back to looking at Cindy and other supermodels the way they always have: as unattainable ideals of beauty. Cindy Crawford is so perfect that someone actually had to Photoshop cellulite onto her body to bring her down to the level of us mere mortals. I don’t know about you, but that kind of makes me feel worse about myself. How could we ever measure up to that? And girls (and boys) struggling with body image have lost the chance at a new role model. We’ve really lost out on a good influence here, although I think Cindy Crawford should still take a stand against Photoshop and the body shaming that comes with it.

So to the person who doctored the photo: what have you accomplished? You’ve made a mockery out of yourself and out of millions of perfectly normal women.

The art of remembrance


There is nothing quite like the human brain. I’ve mentioned it in previous posts and I’ll mention it again: no technology (I don’t care how good it is) could ever replace the intellectual capacity we are born with. Every second of our lives, neurons are firing. Our brains record images, sounds, smells and feelings and then process that information in a way that makes it mean something. So when we try to replace human functions with new technology, there’s no way to tell the full effects of it…yet.

That’s what fascinated me about Tu’s article in The Atlantic.  We are essentially relying on technology to hold our information instead of entrusting it all to our brains.  When we do this, two things can happen: we either look back on ordinary events captured by technology with pleasure or we don’t have detailed memories of these events at all.  Tu addresses this point in the article and draws on the knowledge of psychology professor Linda Henkel.


“Based on this experiment, Henkel argues that when we photograph something, we end up relying on the camera as an external memory source.  ‘With cameras, what we seem to be doing is outsourcing our memories–we expect that the camera’s got it. The camera’s got the picture,’ she says.  The problem is, we end up brushing aside the whole moment as well.”


But what does this information mean for us? It’s a little too much to ask technology users to monitor their usage in the name of the human brain. We want life to move faster, be clearer and be available at our convenience, while still holding on to its meaning. It’s a tall order, to be sure.

Here’s what I suggest: let’s create technology that complements what our brains can already do instead of trying to replace them. We need implanted technology that allows us to experience a moment and capture it at the same time. Want to take a picture? Cue the technology to snap a photo based on what your eyes are already seeing. Instead of having to hold up a camera and take the time to get the right shot, you could literally blink and have a photo. Admittedly, I’m no engineer. I don’t know what this idea would entail; maybe a headpiece similar to the Google Glass model. But what I do know is that something like implanted technology could act as a back-up to our brains, and both ways of recording memories would be able to flourish.