How Might We… Stimulate Information Sharing?

Before I went on an extended hiatus, I used to try and string together a few posts into a series. In that same vein, I thought I'd start another one of those, but I won't specifically string them together by appending "Part 1, 2, etc." on the end, nor will I necessarily link to previous posts in the series. It'll be more like an anthology series, in that each post will be able to stand alone. This series, the "How Might We…" series, is a way to inspire us to think broader. To think blue-sky. Imagine the possibilities.

OK. With that aside, let's move on to a bit of pedantry. I spent far too much time hemming and hawing over using the subtitle "Incent Information Sharing" or "Incentivize Information Sharing." Incentivize is one of those words that's been unnecessarily created (the verb from incentive is incent; there's no need to go and add an -ize). I very nearly stuck with "incent" because its very definition (i.e. being rewarded for doing something) is what I meant to be discussing. In the end, it felt more important to use accessible language (i.e. incent is a tad jargon-y). Alright, so let's dig in.

I saw a tweet from Scott Galloway (see below) the other day that made me want to reflect on the idea of “information hoarders.”

Scott’s keying in on managers and it’s certainly important for managers to be more forthcoming with information, but there are far more non-managers than there are managers. Couldn’t it subvert the problem if there were incentives for employees to be sharing information within their teams? How about… within their divisions? What about… across silos?

From a private-sector standpoint, one could argue that the information sharing might stop at the company's edge, but for folks who work in government, as long as you're not divulging any secret/protected information, is there any reason why we can't be more openly sharing information between departments? That's not a rhetorical question; I'm actually asking.

To my mind, the frame we have is all wrong. Everything starts close to the vest and then we decide what we'll share after the fact. I can understand why things would have begun this way, but I don't think it's in our best interest (both the government's and that of the Canadians we serve) if we continue this way. What if we flipped the switch and everything were open tomorrow? (Gasp!) OK, that's a bit off-the-ledge, so let's frame it this way. What if, starting tomorrow, everything we did started from a frame of being open? Meaning, if something were meant not to be shared, then we'd have to specifically identify it as such (a little bit of behavioural economics, eh?).

I have no doubt that there'd be accidental goofs, but wouldn't that tremendous amount of openness allow the government to better realize efficiencies within itself? I bet there are things that some folks know in one corner of some department that would be helpful to some other corner of a completely different department, but that there's no obvious way for the information to get from A to B. Simply opening things up won't guarantee that a connection gets made, but the chances are far greater than if there weren't openness.

Sure, maybe it’s easier for me to propose an idea like this because I don’t currently have any vested authority to implement this kind of an idea and that’s why I want to key in on that first pedantry discussion — incentives. While it’s possible that an executive might, out of sheer principle, decide to swing open the doors and ask everyone to share openly, I don’t expect that that will be the prevailing opinion. Instead, we’ve got to find a way to make it halfway required — incentives. Incorporating this into a performance management agreement might be a good place. If one will be evaluated on one’s openness, then one is probably far more likely to be open (at least, that’s how the theory goes).

So, what might this look like in practice? Well, let's start with a simple, relatable example — Outlook calendars. Have you ever peeked at someone's calendar to try and plan a meeting with them? Of course — I'm sure nearly everyone has. My guess is that when you were doing this, you noticed that their calendar only showed you busy/free times, unless you happened to have been given special access to 'view' the subjects/locations of the meetings. And, if you're super-lucky, you might even be able to OPEN those meetings and see the agenda/content. What if, hmm, what if, the default was that our calendars were open? (Gasp!) Would that be scary at first? Sure! Maybe you don't want people to know that you have a bi-weekly meeting with your podiatrist. OK, so what can you do? Well, fortunately, Outlook has a way to mark those meetings "private," so even someone with "access" to view your calendar wouldn't be able to see them.
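To make the "open by default, opt out explicitly" idea concrete, here's a minimal sketch in Python (a hypothetical model I cooked up for illustration, not Outlook's actual API or settings):

```python
from dataclasses import dataclass

# A hypothetical model of "open by default" calendars: every meeting is
# visible unless someone explicitly flags it private. The policy lives in
# the default value, which is the behavioural-economics point above.

@dataclass
class Meeting:
    subject: str
    private: bool = False  # flipped default: open unless marked otherwise

def visible_subject(meeting: Meeting) -> str:
    """What a colleague browsing your calendar would see."""
    return "Busy" if meeting.private else meeting.subject

schedule = [Meeting("Divisional planning"), Meeting("Podiatrist", private=True)]
print([visible_subject(m) for m in schedule])  # ['Divisional planning', 'Busy']
```

The work here is done entirely by the default: nobody has to remember to share, only to withhold.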

And I know, this idea isn’t new. I’m sure that others have proposed it and even tried to implement it within their teams or with the executives in their reporting line. The difference here in what I’m suggesting is that there’s an element of ‘requirement’ by way of a performance agreement.

I don’t expect this to change overnight, but wouldn’t it be wonderful if you take a new job in a new department and when you get there, you’re able to view meetings on your Director’s and your Director-General’s calendar!? Wouldn’t it be so cool to know that one of your executives is meeting with an executive from a different department on a subject that you know about because of your time in a completely different department, so you tell your manager, who tells the executive and then you find yourself in the meeting, too, because your perspective is invaluable?

Electing Officials to Represent the “Future”

I was catching up on some podcasts this weekend and I heard a particularly interesting one — Ezra Klein interviewing Astra Taylor. The conversation is wide-ranging, but there were a few bits that stuck out to me.

Astra talked about the differences between democracies, aristocracies, and lotteries, and discussed the idea of every citizen serving in the political body at some point. Not an entirely new point, but one thing that stood out was that there's nobody in the political body whose sole job it is to mind the "future." As in, where's the Congressperson or MP who's elected to represent the people who have yet to be born? And I don't mean the choice/life dichotomy, I mean — what about the child who'll be born three generations from now and has a right to clean air and water?

She also raised the point that some could make the argument that we're violating the constitutional rights of people not yet born (/mind blown!).

I want to circle back to the Congressperson/MP minding the future. Of course, there isn't someone like this (at least not in any of the political bodies I've seen), but wouldn't it be cool if there were? Wouldn't it be cool if we had an "at-large MP" who was part of the House of Commons and whose sole role was to take into account (or represent?) the interests of the people who are to be born seven generations from now? (I don't think I've talked about this on here, but there's a whole movement around "for the next seven generations.")

And this made me think about some of the founding documents of our nations. Take, for instance, the US Congress. By all accounts, there's quite a bit of intransigence built into the way the system is arranged. Some argue that this is intentional, and some argue that it's why the business of government has ground to a halt. What if… what if we were to remake the business of government in the US? I know, I know, it would never fly, but let's just imagine a world where we can redesign the Senate, redesign the House, etc. Would it look the same after we were done? Probably not. Would we include representation for people who are to be born seven generations from now? Also probably not, but I'd like to think that maybe we would. Maybe we would think beyond ourselves in this moment, about those who will inhabit the space in the years to come.

Quick Thoughts: Planning Fallacy, Sci-Fi, Gendered Language, and Scarcity/Excess

As I look to breathe some life back into writing, I thought I'd take a quick peek at some of the "drafts" I had saved from when I used to write regularly. Fortunately, there aren't too many there. In the interest of trying to start fresh, I thought I'd do a quick post addressing some of these ideas kind of in the same way that Wikipedia has stubs.

Planning Fallacy: Many years ago, I wrote about the planning fallacy as part of my series on cognitive biases (i.e. how to make decisions better). Something I didn't talk about in that post was the difference between a 7-day and a 5-day workweek. Let me explain. For those that go to university, every day is eligible to be a "work" day (i.e. homework). Many things are due on Monday morning, and rather than push to complete something on Friday afternoon (or night?), students will often be writing things on Sunday night. Not only am I drawing on my experience as a student, but I've been teaching for the last 7+ years and I can say with authority: if I give students a full week (7 days) to complete an assignment and make it due at 11:59pm on Sunday night, 50%+ of the class will complete that assignment sometime on Sunday. But I'm digressing a bit from the point. So, we get used to this "7-day workweek" for completing things. When we move into the work world, that "week" shrinks from 7 days to 5 (and even fewer in the case of holidays, or fewer still if you take into account mandatory meetings, etc.). So, when someone estimates how much time they'll have to complete a work product, they're anchored (primacy effect?) on how they estimated when they were a student and don't take the "truncated" week into account.
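A quick back-of-the-envelope sketch of that gap, with made-up numbers (only the 7/5 ratio matters):

```python
# A toy illustration of the "truncated week" planning fallacy.
# The specific numbers are hypothetical; the point is the 7/5 ratio.

STUDENT_DAYS_PER_WEEK = 7  # as a student, any day can be a homework day
OFFICE_DAYS_PER_WEEK = 5   # at work, weekends are (usually) off the table

def calendar_weeks_needed(estimated_weeks: float) -> float:
    """If you estimate with a 7-day week in mind but actually work a
    5-day week, the real calendar time stretches by 7/5 = 1.4x."""
    return estimated_weeks * STUDENT_DAYS_PER_WEEK / OFFICE_DAYS_PER_WEEK

print(calendar_weeks_needed(2.0))  # a "two-week" estimate becomes 2.8 weeks
```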

Science Fiction, Humans, and Aliens: In those sci-fi movies that have some form of alien (i.e. non-human), oftentimes there'll be a scene where the humans are kept in a cage. It made me think about how humans keep some animals in cages (e.g. zoos). Maybe a different species would treat us (humans) the same way we treat animals.

Gendered Language: There was a journal article from a few years ago that caught my eye. Here’s a bit of the abstract:

The language used to describe concepts influences individuals’ cognition, affect, and behavior. A striking example comes from research on gendered language, or words that denote individuals’ gender (e.g., she, woman, daughter). Gendered language contributes to gender biases by making gender salient, treating gender as a binary category, and causing stereotypic views of gender.

Like I said yesterday, if you've been following me for any length of time, this bit about our words having an effect on us shouldn't come as a surprise. However, this journal article is strong evidence to keep in your pocket if you need to point to something evidence-based in a discussion.

The Problem of Excess: Another good journal article that I've been saving for over 5 (!) years. /facepalm. I still remember my first grad school economics course, where the professor was explaining the fundamental principle that underlies economics — scarcity. I wanted to raise my hand and disagree on the merits, but it didn't seem appropriate. I can see and understand how scarcity came to be the dominant theory of the day, but a part of my being just feels that that interpretation is… near-sighted. Seeing this journal article a few years after that class highlighted a different perspective. Here's a bit of the abstract:

This article argues for a new branch of theory based not on presumptions of scarcity—which are the foundational presumptions of most existing social theory—but on those of excess. […] It then considers and rejects the idea that excess of one thing is simply scarcity of another. It discusses the mechanisms by which excess creates problems, noting three such mechanisms at the individual level (paralysis, habituation, and value contextuality) and two further mechanisms (disruption and misinheritance) at the social level. […] It closes with some brief illustrations of how familiar questions can be recast from terms of scarcity into terms of excess.

Enjoy your weekend!

Are You Full: What’s in a Norm?

Language matters. Belief matters. Thoughts matter. How we speak to each other matters. How we speak to ourselves, matters. All of it. If you're reading this, these ideas probably aren't news to you, so I want to take this to a concrete example and then, zoom out to consider its effects.

There are many things that unite the human experience, but one that is absolutely universal — eating. We all need to take in nourishment (nearly all of us — daily) to sustain our existence. While there are examples of the human body being able to sustain itself without food for three weeks, I can’t think of anyone who would recommend doing so with any kind of regularity. Some folks will eat three times a day, some folks will eat five times a day, and some will eat more or less.

Depending upon your upbringing, where you grew up (geographically), and the kinds of events that were influencing the time (or had an influence on your lineage), you may have a particular relationship to food and eating. Some may have been trained to eat everything on their plate, so as not to "waste" the food (ignoring any signals from their body). Some may not have been able to eat as much as they wanted because there wasn't enough food to go around. Some may have had an abundance of food and never had to worry about either of those things.

Let’s focus on the scenario where people were trained to eat all the food on their plate. There are very good reasons as to why someone may have had this message communicated to them — (i.e. growing up in the Great Depression, there wasn’t a lot of food to go around, so if you had food, you ate it). This same way of thinking would have been passed down to the next generation and when they became parents, they would have said the same thing to their kids (even if they were living in conditions that would have seemed “rich” compared to the situation in which they grew up in when they were kids).

One step forward from this is being "full." Have you ever been asked by someone whether you're "full?" Have you really thought about what that means and how you're answering? "Have you eaten so much food that you could not eat a single bite more or else you'd burst at the seams?" Is your "cup" (i.e. body) filled to the brim, such that any additional drop of water would cause it to overflow? Do you really want to feel "full" after you eat? [NOTE: I'm abundantly aware that there are people out there who don't get enough to eat and are perpetually hungry, and if you happen to be one of those people reading this post, then please forgive what may seem like insensitivity. While there is obviously a problem on the one hand where people don't get enough to eat, there is also a problem on the other hand, where people eat too much — and that's what I'm trying to address here.]

Right, so, "are you full?" What an awful question to ask someone. A question where the norm being held up as positive is "being full." As a parent, I'm forced to confront commonplace words and phrases on a daily basis. Do I want my kids to develop aggressive language for when they've completed a task or accomplished a goal (e.g. "Yeah, I 'killed' that test")? Why would I want them to bring "kill" into their daily lexicon? Such a violent word. Such a misplaced word (in this context).

When it comes to eating, the question that I home in on is, "are you all done eating?" And if it's the last 'meal time' of the day, "are you all done eating for today?" This has the added benefit of trying to make sure that they're not going to emerge from their bedroom 10 minutes after you've put them to bed and ask for more food. Anyways, yes — "are you all done eating?" It might seem simple in that it's still a binary question, but the inherent norm is expunged. No longer am I priming them to feel like they've eaten "so" much food. Instead, the person can rely on their internal bodily signals to indicate whether they're done eating "for now."

I did say I wanted to "zoom out," so here goes — worldwide obesity has tripled (!) since 1975. More people live in countries where people die as a result of being overweight than underweight. (Wait, what?!) Yeah, that's right. About 40% of adults (18 and up) — globally — are overweight. Forty percent. FORTY PERCENT! How can we possibly be eating so much? [NOTE: I recognize that there's a solid argument to be made here about what we're eating vis-à-vis super-sizing our diets with sugar and corn, but let's park that for now.]

Diet is absolutely important, but maybe even more important is our relationship to food. Are we eating to “get full” or are we eating to nourish ourselves? Are we eating until we’re done eating, until our body gives us those signals that tell us it’s time to stop eating, or are we trying to clean our plates and ensure nothing goes to waste?

What is “Artificial Intelligence,” Anyway?

Sometimes, I wish I could go back to 1955 and prevent John McCarthy from calling it "artificial intelligence." It's a term that, depending upon where you work, you can't go 5 minutes without hearing it once or twice — which is great. It's great that people are looking to the 'future.' It's great that society is pushing forward with growth and expansion and all that warm and fuzzy stuff. Unfortunately, AI doesn't really do justice to what it's describing.

AI isn’t really “artificial” nor is it really “intelligent.” In fact, you could even argue that AI is really really dumb (wait, what?!). Yeah, dumb. Caveat: I’m speaking about the kind of AI that exists in this moment. If scientists can crack artificial general intelligence (i.e. Terminator, Hal, etc.), then, well, then that’s a whole new ballgame. But right, AI, as it exists right now can be thought of as a sort of ‘idiot savant.’ It can do the tasks that we tell it to do and do them extremely well.

Did you catch that? Let me say it again. It can do the tasks that we tell it to do and do them extremely well. And that right there is the hitch. I can't leave an AI at your doorstep and expect it to make you dinner. I need to give it some direction (NOTE: this is assuming that there isn't some AGI out there that hasn't been released). Maybe I give it a command like "make dinner" or "wash the dishes" and then it follows the rules/algorithms for navigating the space inside your house or apartment to get to the kitchen, find the fridge (or the sink), and continue forward with its work.

When you think of it that way, that's not really "intelligent," is it? Nor is that really "artificial," is it? And it's certainly not artificial intelligence. Instead, it's more like task automation. Granted, it's a bit more sophisticated than that (any AI expert reading this is probably thinking I've lost my marbles), but that's another thing that's frustrating about nebulous terms like AI — they mean something very specific to the people who work in that field; to everyone else, it's jargon. The problem with a term like AI is that the entertainment industry has given us plenty of images of what a fictitious AI might be able to do, and so having a reasonable conversation on the topic of AI with someone not versed in the particulars can be daunting.

Circling back to the task automation bit — to set the minds of AI experts at ease — I know, it's not just task automation. It's task automation that's informed by reams of data (even that might get me into trouble with some folks who want to be more specific). That's what makes it seem like there's some kind of 'magic' at play. So, imagine the AI at your front door had reams of data about how you load your dishwasher, about how whole cities of people load their dishwashers, or that it knew all the recipes you might select from, how often you select them, and on what days. Data. Data is the fuel that pushes the 'task automation' forward.
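As a toy illustration (entirely hypothetical, not how any real system works), the "intelligence" in that dinner example could be as simple as a counting rule applied to usage history:

```python
from collections import Counter

# A toy sketch of "task automation informed by data": the apparent smarts
# are just a fixed rule applied to a usage history. All data is made up.

meal_history = ["pasta", "stir-fry", "pasta", "tacos", "pasta", "stir-fry"]

def suggest_dinner(past_meals: list[str]) -> str:
    """Apply one hard-coded rule: suggest the historically most frequent meal."""
    return Counter(past_meals).most_common(1)[0][0]

print(suggest_dinner(meal_history))  # "pasta": no magic, just counting
```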

~~

My point in discussing some of the finer points of AI today was not necessarily to get into the weeds of its definition, but more to illustrate that there are terms out there that have a very specific meaning to some folks, but when widely discussed by non-experts, can mean something very different. This reminds me of something I wrote a few years back about "the Economy." It can mean something very different depending upon to whom you're talking. For better or worse, AI seems to be one of those phrases, and I'm sure it's not the only one. Can you think of any terms in your field that you've heard discussed in the popular press in ways that, rightly or wrongly, oversimplify their meaning?

Where Kindness Reigns

A few nights ago, the fifth episode of the seventh season of The Amazing Race Canada aired. If you're not familiar with "The Amazing Race," the idea is that teams (usually pairs) 'race' around cities and/or countries while undertaking a series of tasks/activities. Every so often, some teams will be eliminated for being the last team to complete an activity (technically, to arrive at a particular destination).

On the episode in question, oh wait, **SPOILERS AHEAD**

OK. On the episode in question, there were seven teams remaining. One of the tasks required the teams to dig on a beach for clams (Deep Bay, BC, for those keeping track). Each team had to find close to 100 clams in total (some of a specific species). Teams' clams were checked by a marine biologist. The task looked super-arduous, as the clams weren't simply sitting on top of the wet beach. No, instead, teams had to use their hands (or shovels) to dig into the sand.

Sometimes, the tasks assigned by the show can be quick (teams can be in and out in under 30 minutes). This task, however, seemed to take a long time, especially for those teams that elected to complete it. Oh yeah — one more important process point. If a team decides they don't want to "do" a task, they can take a 2-hour time penalty and skip it. There's certainly a risk/reward calculus here: if the teams who complete the task take less than 2 hours, then taking the penalty significantly increases the penalized team's chances of finishing last (and possibly being eliminated). Conversely, if even one team takes longer than 2 hours to complete the task, then the penalized team will have been 'rewarded' (in a sense) for electing the penalty.
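If it helps, here's that calculus in miniature (all times invented for illustration):

```python
# A toy model of the skip-or-dig decision; all times are hypothetical.
PENALTY_HOURS = 2.0

def leg_time(task_hours: float, take_penalty: bool) -> float:
    """Hours a team effectively spends on this leg of the race."""
    return PENALTY_HOURS if take_penalty else task_hours

# Skipping beats any rival who takes longer than the 2-hour penalty...
print(leg_time(4.5, take_penalty=True) < 4.5)  # True: the penalty paid off
# ...but backfires against rivals who finish the task quickly.
print(leg_time(1.5, take_penalty=True) < 1.5)  # False: the penalty hurt
```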

Back to the beach.

So, some teams have arrived and are digging the beach. When one team in particular arrives, they spend a nominal amount of time and then elect to take the penalty. This is a team that has already done this in other parts of the race. As soon as they make this choice, it's clear that some of the other teams aren't pleased. There appear to be some group norms around the Race and how a team should behave (i.e. completing tasks, not skipping through with 2-hour penalties in tow).

Fast-forward a little bit and there are two teams remaining at the beach, still looking for a particular kind of clam that has proved elusive. The teams that have already left either completed the task (I believe just one did) or took the penalty (the other four). The two teams still there are physically and emotionally spent. They've been there for over four hours completing a task where they've watched compatriots (mostly) elect to skip. They've only seen one team complete the task and both are pretty sure that there's a good chance that one of them will be eliminated (for coming last).

Instead of what you might expect — walling off from your opponent and trying to beat them — these two teams decide to work together! They begin to look for the missing clams — together! Eventually, one of the teams strikes proverbial gold — the missing clam. They get it checked and are free to move onto the next task. Except…

They don’t. They stay. The team declares that they want to stay and help the other team finish the task. That it’s the right thing to do.

The stakes here aren’t low, either. The winners of the Race collect a hefty sum ($500k combined), an around-the-world-trip, and a new SUV. Can you imagine imagine making the “kind” choice, under the circumstances of emotional and physical exhaustion, and a substantial reward? I think many of us hope we would.

The second team eventually presents a clam that’s cleared and then both teams head off to the next task together. On the way, one of the teams — the team that was ‘helped’ — figures out the solution to the next challenge. Upon arriving, the ‘helped’ team aids the ‘helping’ team (from the previous task) and then both teams are able to move onto the last step of this part of the Race. How wonderful that the kindness from only moments ago was immediately rewarded.

The two teams arrive at the last step and finish “tied” for second place. Remarkable.

Can you imagine working in an environment where your colleagues, instead of working their tail-feathers off to finish their work and get the heck out of the door, checked in on how you were doing to see if you might need some help to finish your work, and then you both could leave an hour early (rather than them getting to leave an hour and forty-five minutes early)?

Can you imagine living in a world where kindness was the norm? Nay, the expectation? A world where everyone held doors for each other. A world where everyone was willing to give up their seat on the bus. A world where everyone smiled at each other, no matter the colour of the skin of the person looking back at them. A world where the expectation was that people were always looking for ways to help you. A world where kindness reigns.

Guess Who’s Back, Back Again

/blows dust off the website

Well, hello there! It's been a while, eh? Nearly two years ago was the last time I pressed publish on something that went live on this site (not including the minor editing I did yesterday on the "about," "contact," and "disclaimer" pages).

In short, I’m back. I’ve been doing some introspection lately (i.e. what makes me happy, what do I enjoy doing, etc.) and one thing that kept coming to mind was the idea of writing. I used to write on a regular basis and I found it very positive (not just for the cathartic and therapeutic reasons), but because I felt it a way to clear out my head-space. That is, if I hold onto ideas, keep them in my head, it doesn’t allow room for new ideas to pop in. [Limiting belief system alert! Yes, yes, I know, there’s technically no limit to the number of ideas one can hold in their head, and I’m probably proof-positive of that. However, I’ve found that if I don’t express an idea, I tend to rehash it, so putting it out there on the proverbial page has proved very helpful.]

Right, so, writing. I'm going to aim to write something just about every day (save for weekends, at least to start). As always, I'll aim to write something in the same vein as the Blind Men and the Elephant — spotlighting a different perspective on an issue/matter.

For today, for right now, I’m just glad to be back. Glad to be putting words out there. Whether that’s shouting into the void or reaching you or someone you know on a very personal level. Either way, I’m glad that you’re here with me.

Learning to Say What You Mean: Parenting 101

I’ve been a parent now for a few years. In fact, I’ve been writing about Christine Gross-Loh’s book for nearly as long as I’ve been a parent. Certainly, there’s lots to learn about being a parent and lots that one can learn from being a parent. To date, there’s one salient lesson that stands above the rest: intentional speech.

Whenever I’m speaking to my kid (or any kid, for that matter), I’m always acutely aware of the words that are coming out of my mouth. For one, this little person is still learning the language, so it’s important that I be as precise as possibly can (within reason). In particular, I’m thinking about idioms.

If you’ve travelled to different parts of the country (or different parts of the world), undoubtedly you’ll have come across some phrases that might sound… odd. For instance, I bet you’ve probably let the frog out of your mouth on an occasion or two (Finnish idiom to say the wrong thing). Or when giving directions, has anyone ever told you that the place you’re trying to go is just a cat’s jump away from the museum (German idiom for something that’s not too far away). Or maybe, you and your friend are walking around a new part of town and your friend says to you, “I sense owls in the moss,” (Swedish idiom for finding/seeing something suspicious).

I could go on, but the point here is that cultures from around the world have created phrases to say something (when they really mean something else) and the same thing has happened in our culture. Have you ever done something at the drop of a hat or met someone who was all bark and no bite? Do you find your boss tends to beat around the bush or maybe sometimes add fuel to the fire? Have you ever wished that someone would break their leg?

Think about what these phrases might sound like to someone who’s just learning English. Break a leg. How rude. Or what about saying that to do something is a piece of cake? What the heck is that supposed to mean? Do I need to eat a piece of cake before you tell me how to drive to the airport or will there be cake at the airport?

For toddlers, it’s hard enough to learn how to maneuver one’s body and learn a “foreign language” (learning one’s first language is *kind of* like learning a foreign language, if you think about it). So, why would we compound the difficulty by simultaneously teaching them idioms? There’ll be plenty of time for them to learn how to feed the donkey sponge cake (thanks Portugal!).

Do You Have Ideas? Great, Write About Them

I came across a great post recently that, you might say, captures one of my guiding principles here — everyone should write:

You don’t talk about these ideas, even in your own head, because you’ve never put them into words. […] We’re all brimming with opinions on these topics that we may never discuss, even with ourselves. […] Writing crystallizes ideas in ways thinking on its own will never accomplish.

[…]

Sometimes writing is encouraging. You realize you understand a topic better than you thought. The process flushes out all kinds of other ideas you never knew you had hiding upstairs. Now you can apply those insights elsewhere.

Other times it’s painful. Forcing the logic of your thoughts into words can uncover the madness of your ideas. The holes. The flaws. The biases. Thinking “I want this job because it pays a lot of money” is bearable. Seeing the words on paper looks ridiculous. Things the mind tends to gloss over the pen tends to highlight.

These are exactly some of the reasons that I write. Sometimes, you think you have this grand idea and you've been carrying it around for months. You think, "if only they would have done it like this, things would be so much better." However, when you take 10 minutes to sit down and try to write about this thought, you realize there are a thousand different reasons why they didn't do it the way you thought, and are instead doing it the way they are.

Of course, there are also those times that when you do sit down to write, you realize that your idea is even better than you had originally thought. OK, maybe that doesn’t happen as often, but fleshing out ideas is an important step in the creative process.

Another important reason why I like to write is that I find that if I keep ideas in my mind, they continue to swirl around. By extension, this doesn't leave "room" for other ideas to float in. I know this is kind of limiting, but sometimes, I feel like I need to get the ideas out of my head and onto the page, so that new ideas can make their way in. It seems absurd that there is a "finite" amount of space inside one's mind, but as it happens, we tend to think about the same stuff over and over. Additionally, I thought I had come across something from Einstein, Franklin, or one of the other famous creatives on the importance of writing every day (for this very purpose), but I can't seem to find it. Either way, that's not what's important. What's important is writing something. So, get on with it: write something!

You don’t have to publish what you write, though I find that it helps me to be a bit more focused. If I know that there’s a chance that someone might read what I’m writing (eventually) I’m motivated to be at least a little polished. I know errors will still make it through, but on the whole, the meaning still gets through.

Alright, your turn…

Do Kids Move Back in with Parents Because They’re Trained to be Helpless: Parenting Without Borders, Part 10

In the Introduction, we broached the idea that the way other cultures parent might be more "right" than the way that the culture in North America parents, as discussed in the book Parenting Without Borders. In Part 1, we looked at some of the different cultural thoughts around sleep. There was also that stunning example of how it's normal for babies in Scandinavia to be found taking a nap on the terrace in the dead of winter! In Part 2, we explored "stuff" and how having more of it might not be best for our children. In Part 3, we looked at how different cultures relate to food in the context of parenting. In Part 4, we looked at how saying "good job" to our little ones might not have the effect we think it does. In Part 5, we talked about the virtues of allowing our little ones the space to work through problems on their own. In Part 6, we examined the importance of unstructured "play." In Part 7 and Part 8, we explored what education is like in East Asia and Finland. In Part 9, we looked at cultural notions of kindness in raising kids. In Part 10, we'll explore the possibility that parenting might be fostering a sense of helplessness in children today.

Yes, the title of this post is a tad clickbait-y, but after reading the final chapter in Gross-Loh's Parenting Without Borders, I can't help but think that the phenomenon of university students landing in their parents' basements upon receiving their diplomas has something to do with the way they've been reared. Of course, there are many other factors at play (including things like the economy and recessions, etc.), but I don't think that this idea is too fantastical.

Remember the anecdote from Part 9: “In 1970, the primary goal stated by most college freshmen was to develop a meaningful life philosophy; in 2005, it was to become comfortably rich.” Well, there’s also a big difference in the way that kids are treated at home (even within a given country).

In 1950, an eleven-year-old growing up in a one-bedroom apartment in Brooklyn was responsible for waking up on time, making his own breakfast, and getting himself out the door. […] He also did the family shopping: going to a corner grocer to buy bread or rolls, or to pick up milk.

Contrast that with today’s America:

“I pretty much do all the chores in the house,” [says a mother of three pre-teens aged nine, eleven, and twelve].

According to the author of The Anthropology of Childhood, it’s “absolutely universal” for children to want to help adults in their communities. We think that sheltering kids from work will help them succeed in all those extracurriculars and allow them more time to complete all that homework. The issue here is that while kids want to help, we’re unintentionally squashing that motivation.

When we ignore our children’s eagerness to participate when they are younger, they internalize the idea that contributing is unimportant and they are helpless. They also begin to expect that things will be done for them.

This shouldn’t come as news to anyone who’s read the work of pediatrician Dr. Spock:

Chores, even if not perfectly done, help children gain good self-esteem and make them feel like they are contributing to the family.

And isn’t that what most people want for their kids, anyways? A well-developed sense of self-esteem and a healthy desire to contribute to the world around them? Simply asking children to do chores isn’t enough — it needs to be part of our expectations (or boundaries?). They key here is not necessarily that kids are learning how to contribute to the home, but that they’re learning to feel responsible for themselves. This fosters a sense of self-reliance, so that when they’re older, they know that they’ll be able to figure things out and maybe more importantly, that they’re responsible for figuring things out for themselves.

To illustrate the contrast in cultures, Gross-Loh shares a stunning example of five-year-olds in Japan [Emphasis Added]:

[They] prepared an entire meal for their parents at school and did everything by themselves, from paring the potatoes to cutting the meat and carrots for the stew with chef's knives. Because the social expectation in Japan was that children were capable of acting responsibly and doing chores, the kids had daily practice in helping out at school. Our kids were getting clear and frequent messages about how highly valued it was to be helpful, self-reliant, and responsible from just about everyone — teachers, friends' parents, and even from their own friends.

How many parents in North America do you think would let their five-year-olds use a paring knife, much less a chef’s knife? Another poignant quote from the chapter: “When people talk only about what they’re protecting their kids from, they’re not thinking about what they’re depriving them of.” If we don’t give our little ones the chance to fail, how will they learn to succeed?

Brief related tangent — I came across a delightful article recently where a father’s daily question to his kids was, “What did you fail at today?” The idea behind it being that failure is a necessary part of growth.

~

Building on some of the points on autonomy and self-reliance in this chapter, Gross-Loh also explained that the way we ask our children to do things matters. Think about how you like to be asked to do something. If someone is off-handedly demanding your attention while you (and they) are engaged in other tasks, are you interested in complying? Probably not. Now imagine you're a 5-year-old. Do you think you'd be more or less likely to comply?