ProductiviTree: Cultivating Efficiency, Harvesting Joy

ProductiviTree #8 Decode, Design, Deliver – Computational Thinking for Smarter Productivity with Eric Sandosham

Santiago Tacoronte Season 1 Episode 8

Summary

In this conversation, Eric Sandosham explains the concept of computational thinking, emphasizing its importance in problem-solving and productivity. He discusses how tools like Excel can enhance computational thinking skills and how these skills can be applied to everyday tasks, including email management. The conversation also explores the relationship between AI and computational thinking, arguing that both are necessary for effective communication and decision-making. Eric highlights the value of intuition in conjunction with computational thinking and provides practical steps for listeners to improve their computational thinking skills. 

Takeaways 

  • Computational thinking is a thought process for problem-solving. 
  • Excel is a powerful tool for developing computational thinking skills. 
  • Many people underutilize Excel's capabilities for data analysis. 
  • Decomposition helps break down complex problems into manageable parts. 
  • Pattern recognition allows for systematic problem-solving. 
  • Abstraction focuses on key attributes while filtering out noise. 
  • Algorithmic thinking involves creating repeatable processes. 
  • Computational thinking enhances productivity in daily tasks. 
  • AI requires clear communication and structured thinking. 
  • Learning to code is essential for non-technical professionals. 

Thanks for listening to ProductiviTree! If you enjoyed this episode, please subscribe and share.

🟢 Spotify: https://spoti.fi/4d17NpN

🟣 Apple Podcasts: https://podcasts.apple.com/us/podcast/productivitree-cultivating-efficiency-harvesting-joy/id1766690892

🟡 Amazon Music: https://amzn.to/3MILlXS

🔴 YouTube: https://bit.ly/ProductiviTree-Youtube

Connect with me:

Have questions or suggestions? Email us at info@santiagotacoronte.com

Eric, welcome to ProductiviTree.

Thank you. Thank you for having me.

So computational thinking, it sounds technical, sounds geeky. Can you break it down in everyday terms?

Sure, okay. I think maybe it sounds geeky and throws people off because of the word compute in there, or computational, and they then tend to associate it with computer science, right? And even the title of your podcast suggested thinking like a computer. There are some aspects of that, yes, but actually it's a little bit different from thinking like a computer. Computers are quite mechanical, whereas this is actually a human cognition skill. Humans also compute, you know, and we sometimes compute in different ways from computers. Now, if I look at Wikipedia, its explanation is quite nice and short: computational thinking is a thought process for formulating problems so their solutions can be represented as computational steps and algorithms. That may still sound a little bit abstract. Personally, I tend to think of computational thinking in the following way. If we look at the space of problem solving, there are two halves to it: the problem framing part and the solution articulation part. Computational thinking actually sits in the solution articulation space. It's a way to work through a logical, efficient method to get solutions executed, as opposed to asking what am I looking at, what is the problem statement. That's not really the space of computational thinking; it's much more the solution design element, and it's about decomposing a complex solution into reproducible computational steps.

You have written that Microsoft Excel is one of the best tools to develop computational thinking. Why?

Actually, if you look at all the different tools out there, Excel has probably undergone the least change in terms of its fundamental structure, right? And if you look at any executive, junior to senior, everyone uses Excel, and that speaks to its utility. On the surface it seems so plain and simple: just a canvas with grids, rows and columns. But the brilliance of Excel is that the rows and columns give you, or sometimes force you, to think in intermediate, deconstructive steps. So if you had to compute, say, employee value or customer value, or you want to figure out some kind of permutation or classification of your data, logically you think: let's make some intermediate variables. You open new columns or rows, most people use columns, and you write some simple formulas to clean up the data, to concatenate it. And as you make more columns, you recombine some of this data into something much more complex. Even the most used embedded formula in Excel, the if-then statement, is very hierarchical and very linear. And that is one of the key elements of computational thinking, right? To break things down into a hierarchical sequence and then execute each of those steps in a very linear manner. So that open-canvas style of Excel gets you into that frame of mind: how do I put things in buckets and in places so that I can see how they relate to each other?
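As a rough sketch of that intermediate-column habit, here is a minimal pandas example; the data, column names, weightings, and thresholds are hypothetical, invented for illustration rather than taken from the episode:

```python
import pandas as pd

# Hypothetical customer rows, standing in for data in a spreadsheet.
df = pd.DataFrame({
    "revenue": [1200.0, 300.0, 4500.0],
    "cost_to_serve": [200.0, 250.0, 900.0],
    "tenure_years": [3, 1, 8],
})

# Each new column is an intermediate variable: the Excel habit of
# breaking a complex calculation into small, inspectable steps.
df["margin"] = df["revenue"] - df["cost_to_serve"]
df["loyalty_bonus"] = df["tenure_years"] * 50              # made-up weighting
df["customer_value"] = df["margin"] + df["loyalty_bonus"]

# A hierarchical, linear if-then step, like Excel's nested IF formulas.
df["segment"] = df["customer_value"].apply(
    lambda v: "high" if v > 1500 else ("mid" if v > 500 else "low")
)

print(df)
```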
You've been working in business analytics for decades. When did you notice that computational thinking was an underrated skill? When did you say, hmm, this is something that everybody should have as a skill?

It comes back to Excel. A little bit about myself: my background is actually in abstract mathematics. That was my bachelor's degree, and when I say abstract mathematics, it's really without computation; it's logic-oriented. I specialized in abstract algebra, in fact. I started using Excel, or became familiar with it, when I first started work in the 90s. At that point Excel was already gaining traction. But when I saw how people used it, I was quite shocked. I worked in a bank, thinking everyone would be computationally efficient, and I was quite shocked at how they were using Excel. They were actually calculating outside of the tool using calculators and then punching the data or the results into Excel, using Excel like a ledger or a record keeper, rather than exploiting its ability to break down or recompute information and to link information up. And that's when I realized, my gosh, people are not using a tool that has all these abilities in it. Why do they not see it? Why do they still come back to calculating by hand, with calculators, and then punching it in as record keeping? So that fascinated me.

Let's break down computational thinking. You speak about decomposition, pattern recognition, abstraction and algorithmic thinking. Can you explain each one of them in layman's terms?

Okay, big words. Decomposition, another familiar word might be deconstruction, is actually breaking something down into basic, reusable, almost standardized blocks, right? For a physical analogy, Lego is a naturally decomposed sort of plaything, because each block is the basic element and you build things up from it. Whatever outcome you want to build in Lego, you have to think in terms of the Lego bricks, which are the decomposed blocks. In the world of software or coding, there is code that you copy and paste, right? You go to GitHub and copy and paste. That code is reusable and comes as chunks of script that can be inserted and reused in different places. That is a way to think about decomposition into individual elements. Now, pattern recognition is about seeing things that repeat, okay? And why are you looking for patterns? Because ultimately, when you want to compute, you want to exploit the pattern so that it can iterate and repeat itself systematically. So you're looking for patterns that repeat, not one-offs, and ideally the patterns can be broken into linear steps to be replicated. The third term, abstraction, is another big fanciful word, but ultimately it's about zooming in on the key attributes of, say, a complex solution you're trying to understand or improve with computational thinking, and taking out all the stuff that's just what we would call noise. It's information signal versus noise: you zoom in on the things that contain useful information signal, and the rest of it is not important.
As a simple example in layman's terms: when you meet a new acquaintance or a new friend, and your spouse asks, you had a great time meeting this new person, can you describe this person to me? You would probably talk about the hair, the facial features, whether the person had short hair or long hair, a beard or not, the colour of their eyes. You may not talk about the length of their shirt or trousers or what belt they were wearing, unless it was particularly notable, right? The rest of it is noise, in the sense that it doesn't really say anything about the person. It could, and we do get it wrong sometimes: we sometimes think something is noise when actually it is information signal. But we tend to distill things down to a key essence, and that's the idea of abstraction. The last one, algorithmic thinking, is about iterative, executable instructions. We don't have to think about it like a computer. Think of the world of operations or manufacturing in the past, when it was not automated and you relied on human hands to get things done. They would have what we call SOPs, standard operating procedures. And in those operating procedures you have a sequence, right? Do A, then B, then C, check that it's fine, and then you can repeat the process again: rinse and repeat. That is a way of breaking things out into algorithmic thinking.

That's brilliant, Eric. But I have some news for you. The thousands of people that listen to this podcast are not interested in abstract mathematics. So how can computational thinking and these four blocks, from decomposition to algorithmic thinking, make you more productive?

Okay. I mean, not everyone may have heard the term computational thinking; people may have encountered it but not labelled it as such, right? Ultimately, what we're after with computational thinking is efficiency: things you can zoom in on quickly, see, repeat, and execute. For a layperson, why computational thinking is important comes down to a few elements. One, you get to a solution design faster, because through abstraction you are able to distill the key attributes and not get overwhelmed or sidetracked by the noise. Two, because of the decompositional and algorithmic process, computational thinking allows you to troubleshoot and makes your solution auditable, right? We know solutions are not always perfect, and when you make them sequential and structured, you can go back and figure out where in the solution things did not work out the way you thought they would. So the ability to troubleshoot, or even to hold yourself accountable to an audit, becomes useful. And lastly, with computational thinking you're making things reusable. Once you can articulate a solution in that manner, you can transfer that knowledge, transfer that solution to someone else, put it into instructions, and they can execute it with a high level of fidelity and replication.

When you say solution, my mind immediately goes to IT-type solutions. Can you apply these methods to building your daily agenda, your daily habits, your email, your inbox management? How can you use this for very simple daily tasks and improve your process, your flow, with computational thinking?

Okay, perfect question.
In fact, you raised email, right? I was thinking about how you would apply it in daily life, and where people sometimes don't do it correctly, and I think email is a classic one. If you're deep in the corporate world, most people, particularly as they get more and more senior, find their inboxes overwhelmed. I remember when I was working in the bank in a senior role, I would get about 200 emails a day. Just one day. And my work was data, so imagine: many of these emails had attachments with data to look at, and it was just not possible to go through 200 emails a day. The way people tend to think about email is: as and when it comes in, I respond to it quickly. There's a philosophy to that, right? Just in time, sort of. Just get it off your desk and quickly get back to your work. Some people use email like we use YouTube: it's a distraction. They're doing a bit of work, they feel a little tired, so let's go to the inbox and look at what new emails have come in. And sometimes they get carried away and never come back to the work. So email is a real time-sucking part of the day.

Now, if you apply computational thinking, we can say, look, we know, there's enough literature, that multitasking is not particularly productive. The human mind doesn't truly multitask, and for some kinds of work we need to get into deep thinking and into a flow. So block the time out. For example, say, look, for every 30 minutes of work I will spend 10 minutes on email. Let's say I cannot block it all out and only do email in the morning; that's fine. So after every 30 minutes of work I will stop and spend 10 minutes on email. If you chunk it out in decomposable ways, that time-bounds it. And then when you look at the email itself, you can set up simple computational thinking rules. You can say, look, let's do an internal prioritization, and you can do it in Outlook and most email applications. You set your rules: anything where I'm in the To line is high priority. Anything where I'm only CC'd is secondary priority. Anything in the To line with an attachment is maybe even higher priority. Anything in the To line from a certain group of people is higher priority still. And when you create that and rank-order your inbox, you get the most important stuff done. If you're CC'd and you missed it and didn't read it, because you're not expected to reply, there's no real penalty. You may be a little embarrassed when you go into a meeting and have to say, sorry, you copied me but I didn't read it; that's fine. But if you didn't reply to something where you were in the To line, particularly with an attachment, which means there's much more information to digest, then not replying can be quite significant. So we're just setting simple rules. As human beings, we can come up with the rules quite intuitively and logically. Putting them, step by step, as rules into your email application would be a form of computational thinking.
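As a minimal sketch of that kind of rule set, here is one way the prioritization could look in Python; the scoring weights, the addresses, and the Email fields are hypothetical choices for illustration, not Outlook's API or anything Eric prescribes:

```python
from dataclasses import dataclass, field

ME = "me@example.com"  # hypothetical address standing in for your own

@dataclass
class Email:
    sender: str
    to: list[str]
    cc: list[str] = field(default_factory=list)
    has_attachment: bool = False
    subject: str = ""

# Hypothetical "certain group of people" whose mail ranks highest.
VIP_SENDERS = {"boss@example.com"}

def priority(msg: Email) -> int:
    """Score a message the way described: To beats CC, and
    attachments or key senders push a To message higher still."""
    score = 0
    if ME in msg.to:
        score += 2                      # directly addressed: high priority
        if msg.has_attachment:
            score += 1                  # To plus attachment: higher
        if msg.sender in VIP_SENDERS:
            score += 2                  # To plus important sender: highest
    elif ME in msg.cc:
        score += 1                      # only CC'd: secondary priority
    return score

inbox = [
    Email("colleague@example.com", to=[ME], has_attachment=True, subject="Q3 data"),
    Email("newsletter@example.com", to=[], cc=[ME], subject="Weekly digest"),
    Email("boss@example.com", to=[ME], subject="Budget review"),
]

# Rank-order the inbox so the most important messages surface first.
for msg in sorted(inbox, key=priority, reverse=True):
    print(priority(msg), msg.sender, msg.subject)
```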
That's brilliant. Let's talk about the keyword of the decade: AI. AI carries a big promise, and a little bit of hype too. It is in fact automating certain things and making our lives easier, that's for sure. But you still argue that we need computational thinking. Why, Eric? Why can't we just ask AI for what we want or what we need?

Okay, so maybe I'll start with a hypothetical future state. We don't know how AI is going to pan out, right? Everyone's talking about sentient AI and all of that. Who knows? It may or may not come. But ultimately, I think the holy grail everyone is going for is an AI that looks and feels and has the intelligence of at least a human being, and that you can interact with in a very human-like manner. Let's say we achieve that future state. Even if AI looks and feels and behaves like us, consider that human to human, we still struggle to communicate. Anyone who's ever had people reporting to them has given instructions that in their mind were absolutely clear, and then the results that come back are not what they thought. And you say, how did you get it wrong? It was so obvious, right? It was not that they were cutting corners or lazy; they just understood it in a slightly different manner. So human to human, we never truly get communication perfectly right. We get it to near 90%, and I think that's good, but it's never 100%. So even if the AI were like a human being, you would still have to communicate in very precise, repeatable, reproducible terms for the AI to do what you want, just as with a regular human being when you delegate work and they come back with the result. So does the idea of AI becoming super intelligent at some point remove the need for computational thinking? I don't think so. Maybe less so, but not entirely.

But look at the state of AI today. And again, I want to be very clear: computational thinking is not about computer science and not about coding; it's about structure, right? It's a cognitive process. You can see it in the current state of AI with prompt engineering. DeepSeek had just come out from the Chinese, and we've got various AI solutions from everyone. And what people love to do is prompt engineering, to see whether they can make the AI break down. And they're so happy with that, right? Wow, I brought it to its knees. That prompt engineering is a form of computational thinking, because you're trying to see whether the computer can block the back door, and it's algorithmic because you're trying it over and over again: is it repeatable and reproducible? So prompt engineering is a form of high-level coding. The human-AI interface will continue to evolve: maybe it won't be prompt engineering in words, maybe it becomes speech, and if you say, well, maybe I put a microchip in my head, it becomes brainwaves and thoughts. But I will still have to structure it in a specific manner for the recipient to be able to take it in and act on it. So I don't see computational thinking as a principle, those four elements, going away. But its form, today we express computational thinking in terms like coding, that actual execution method, may evolve to match the human-AI interface.

Hmm. I see many business tools nowadays completely hiding the computational part and making everything almost conversational and super simple: you just message the AI and it comes back with the result. Is it a good thing that we're creating a black box and hiding everything that happens behind the prompt? Or is it a bad thing?
Okay, good and bad, those are loaded words. Is it a good thing? The answer is no, it's not a good thing. Is it a desired thing? Yes. It's like asking: do you want to eat highly processed food, sweet sugary stuff, drink too much Coke and too much coffee? The answer is yes, we want to. But is it a good thing? Not necessarily. And the vendors are feeding into the desire of human workers for taking, I hate to use the word, shortcuts. And it's not entirely the fault of the human being. Like all living creatures, we have to expend energy to get work done, so it is in our interest as living things to be energy-efficient. That's a nice way of saying some people are lazy, but we should be energy-efficient. People distinguish between what they feel is important and what's less important, and for the non-important stuff: I just want an outcome, an output. I don't need to know all the working details as long as I can trust that output; I just want to get my job done. So it's not really the fault of the human worker to say, hide that complexity from me.

But when everything is hidden, two things happen. One, when you hide a lot of the computational or operating complexity in a solution, you also lose flexibility. Now you can only interact with and use that solution in very structured ways, because that's how it's been set up for you, right? It's been black-boxed, and because you're unfamiliar with how it works, you tend to limit your interaction to very predefined paths: you don't know where the boundaries are, and you're uncomfortable that if you deviate, the output or outcome may not be what you want. That in itself weakens our ability to think and to ask. And two, particularly in the world of digital data, data is noisy: it can be biased, incorrect, invalid, unrepresentative, all of this. If we dissociate too much and don't see how the information gets processed or worked on, we may take on false or incorrect information and then use it for work. And today, I would argue, many people have lost a little bit of their smell test. When they look at a piece of information, they don't see what's wrong with it. And when the boss says, look, it's wrong, they say, yeah, I missed it. Actually, they didn't miss it; they didn't see it at all. In the past, when people were much more hands-on, they could actually see: there's something wrong with how this data is coming in, something has gone wrong with the upstream processes. So I feel people have lost a little bit of that smell test.

By now we know that computational thinking is more about problem solving than about becoming a computer. You have led large analytics teams. You have hired a lot of smart people. What is the biggest mistake companies make when hiring for problem-solving roles?

Okay. I come back to the thing I mentioned earlier in the conversation: in the world of problem solving there are two classes, the problem framing piece and the solution articulation piece. In the data analytics and data science practice, we tend to hire people for solutioning skills. Have you built a machine learning algorithm? Have you had exposure to neural networks, those sorts of things?
But we under-emphasize the problem framing part. Solution design and solution articulation correspond to computational thinking, and you can test whether a person is able to take a complex problem and decompose it; that's all great, and that's essentially what hackathons do, right? But we sometimes put too much weight, I feel, on that side, because solution articulation presupposes that you've already understood the nature of the problem, and now you're just trying to find a better, more efficient, more optimized way to solve it. Too often, in my experience, we have not understood the nature of the problem. We've understood a little of it superficially and think that's all, and we rush towards solutioning, because data scientists are predisposed to those skills. Then they find the solution is incomplete, sometimes irrelevant, and the business people get upset and throw it back. So I wish people would change that weightage a little, to focus more on the cognitive talents and skills on the problem framing side.

Eric, for people listening to this episode and thinking, hmm, this sounds interesting, I want to improve my computational thinking today, what is the fastest way to start?

It's a bit of a mindset shift, because most people want to jump to conclusions very quickly. That doesn't mean the conclusions are necessarily wrong, but we want to shortcut things and skip the steps. To take something and break it into steps, decompose it, ask whether it's repeatable, whether there's a way to make it faster and more efficient, is a bit unnatural for most people. So one thing, of course, is that I would encourage people to read, because the topic in itself feels a little abstract, elusive. The more they read, especially with case studies, the more it will register in their minds that it's not about computers; it's just a way to think, right? And then they can start putting some of this to use from a habit perspective. Of the four components that make up computational thinking, the two I would say they can start to strengthen, to focus on, are decomposition and abstraction. Why? Thinking of big, complex things in terms of smaller, almost identical blocks or components allows us to see the world in terms of reproducible parts. Then you start to recognize: this has a similarity here, that is also similar, because the decomposed elements are the same; it's just that when you build them up, you can make different things with them. Once you understand that, transferring knowledge, processes, and working behaviours becomes easier, because you recognize similarities once you have that decomposing state of mind. The other is abstraction. Abstraction, in simple terms, is: how do I separate information signal from noise? You've met colleagues, or had subordinates working with you, where you say, come to me with a problem and please describe it, and some will go round and round and round and never get to the point, whereas others hit the nail on the head in one single sentence.
That ability to train yourself to ask, what is the essence here, what matters, with the rest being just context or noise, becomes important, because our minds are not trained to look only at the key essence; we take everything in all the time.

This question has been around for years, and I think it's already embedded in some educational programs, the so-called STEM education. Should non-technical professionals learn how to code, or the basics of coding, in order to acquire some of these decomposition, algorithmic, iterative thinking skills?

100% yes. And I'll put in a caveat: this is speaking as a person who hated coding. I'm not speaking from the position of, oh, I'm a computer scientist, I love computers. My first exposure to computers was when I started work. Again, I was an abstract algebraist, so I didn't need computers; I didn't even have a calculator when I was studying. Everything was pure theory. I took a bit of coding in high school and hated it, because it was just too technical and I couldn't seem to get it right. Frankly, maybe it wasn't well taught, so I lost interest. I picked up coding later because the nature of my work in data analytics forced me to. What has dawned on me, looking back, is that we shouldn't approach coding as a STEM subject just because it sits in that space. There are people who say, I'm not STEM-trained, or I don't like STEM topics, and that turns them off. Think of coding really as a language. Why would you learn any language today? We're speaking in English because I need to communicate with my fellow human beings, and in a digitalized world, increasingly, we need to communicate with computers, whether you like it or not; that's a fact. So we have to learn a little of the language the computer understands, as an intermediary for the interaction. If we approach coding as language, I think people may appreciate it more, because it is a language: it has syntax, it has grammar, it has to be written in a specific way, just as you would write English or any other language. Learning to code is just another way to expand the language toolbox for another entity that you are going to interact with anyway, and I think that helps remove a little of the mental barrier to the task.

How do we blend the uniqueness of humanity, I'm talking about intuition, this sixth sense that many humans have, with computational thinking? Because let's face it, Eric, there is something unique in humans. Sometimes you see someone and think, my God, call it heuristics or whatever you want, but how did this person process this information so quickly? How did he come up with this strategy, how did he jump on this business idea, even though computational thinking was saying, hmm, it might not be a great idea? How do we blend these two things?

Okay. The first thing I would say is that people use this term intuition in a very negative way today, given the rise of data and digital, right? We say gut instinct, gut feeling, as if it's a bad thing. For me, intuition is data-driven. It is data-driven. Perhaps an analogy is a way to understand how intuition works. I think we're all familiar now with the nature of AI and the underlying technique called artificial neural networks.
In case listeners are not familiar, what it does is this: the models people talk about, say from OpenAI, have billions or trillions of parameters. What are these parameters? They are essentially weights assigned to small, decomposed information signals, and these little pieces of information signal either turn on or turn off depending on the context. Each of them carries a certain preference to turn on or off, and obviously if I have a billion or a trillion parameters turning on and off, I have more flexibility to shape different kinds of outputs. That's a very simple way to understand it. Intuition works pretty much like that, because the artificial neural network is, in some sense, modeled after the human brain, not precisely, but elements of it. The human brain works on electrical signals that turn on and off, with small, decomposed information signals. So intuition, over time, is this: when you're repeatedly exposed to a scenario, you learn to turn a signal on when you see it again, or to turn it off, depending on whether on or off works out positively. Intuition doesn't arise from nothing; it comes from repeated exposure. People who have good intuition have lived those experiences, lived those lives, been repeatedly exposed. It's exactly how we train the large language models. And if we accept that, then we have to accept that intuition is also valid, right? Just as large language models sometimes go out of whack and hallucinate, a signal turning on when it's not supposed to, so too with human beings: sometimes the intuition is wrong. But the intuition was derived from repeated exposure.
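To make that picture of weights turning signals on and off concrete, here is a toy Python sketch of a single artificial neuron; the weights, bias, and inputs are invented for illustration and are not from the episode:

```python
# A toy single "neuron": the weights are the parameters described above,
# and each input is a small, decomposed information signal.
weights = [0.8, -0.4, 0.3]   # hypothetical learned preferences
bias = -0.2

def fires(signals: list[float]) -> bool:
    """Sum the weighted evidence; the unit turns on or off with context."""
    activation = bias + sum(w * s for w, s in zip(weights, signals))
    return activation > 0     # on/off, like a signal firing

# Repeated exposure is what would tune the weights; here we just probe
# two contexts to see which patterns switch the unit on.
print(fires([1.0, 0.0, 1.0]))  # True: this pattern turns the signal on
print(fires([0.0, 1.0, 0.0]))  # False: this pattern keeps it off
```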
Now, when I think about this new, data-driven way of working: Daniel Kahneman, the very famous Nobel Prize winner in economics, who passed away recently, wrote the book Thinking, Fast and Slow, where he introduced the idea of System 1 and System 2 thinking. In a way, intuition is System 1 thinking, because it's already hard-coded, whereas computational thinking is in many ways System 2: slowing down to reconsider. Now, you need both. If you had no intuition, no System 1 ability, you would have no reactive speed, and there are many times we have to react instantaneously to situations evolving around us. So we do need that. But where the impact or the outcome can be quite significant, and we are not entirely sure, or even where we think we are sure but the outcome is still significant, then yes, let's slow it down a little. I know my gut tells me this; fine, let's take a breath, slow it down, and see: if I take a computational thinking approach, do I come to the same conclusion? If I do, great. If I don't, why not? It doesn't mean one is right and the other is wrong, but where is the difference arising? So I think computational thinking can be a useful complement to intuition, rather than a replacement.

Rapid-fire questions. Answer in less than 30 seconds. One book or resource to master computational thinking?

Okay, there are so many books, but I think the one everyone will probably relate to is Algorithms to Live By. It's probably one of the most cited books, and it's non-technical in the sense that it's written to be accessible to the layman. I think it would be a good way to enter the topic.

Excel or Python? If you had to pick just one for professionals, which one?

Excel.

What's one everyday task that people do inefficiently because they don't think computationally?

Email management, as we talked about.

What is the biggest misconception about data-driven decision-making?

Okay: the phrase data-driven decision-making presupposes that decisions are otherwise made without data. That's not true. All decisions are made with data; it's just a question of whether it's the right data or not.

What is a common but stupid measurement companies rely on?

Okay, I think you're referring to one of the articles I wrote. There are many silly measurements, and I use the term stupid maybe just to provoke people's emotions, but there are many definitions of stupid, right? Does the measurement influence or is it usable for decision-making? Does it even represent the thing you're trying to measure? Is it calibrated? Those are the three things I look at. If it doesn't possess those three attributes, to me the measurement is stupid.

Earlier you said people need a bit of a mindset shift if they want to be more computational in their lives. What is one simple habit you would recommend people start doing today?

I would train myself to look at things in a decomposing manner. In some sense you can think of it as slowing down, stopping to smell the flowers, as they say, so that you notice the details rather than taking everything in as a whole picture. Pay attention to how things are decomposable and made of similar elements. Even in software, if you look at interaction interfaces, they all have very similar decomposable units that are just reassembled differently.

Yeah, I think I'm going to try this. I love cooking, I cook a lot, and I think there are a lot of opportunities in cooking for decomposing, for looking at things from a modular perspective.

Right. Although even ingredients today, unfortunately, are already aggregated, because you buy them from someone who has put them together for you, right?

Yeah. Eric, how can people get in touch with you and avail themselves of your services?

LinkedIn would be the best way. I write a weekly article on things that annoy me in the practice of data analytics and data science. If they look me up, my name is quite unique on LinkedIn, and I have a Medium account under the same name. I also have a corporate website: together with my partner, I run a consulting practice in data science that we started about 12 years ago, called Red and White Consulting, and you can find us on the website as well.

Eric, thank you so much for your time. This has been a different kind of episode, in how you look at productivity from an almost completely different and very abstract point of view. But I love how you managed to decompose it into small, digestible parts. Thank you so much for being with us today, Eric.

Thank you very much for the opportunity.