Sunday, November 12, 2017

Why Do I Go to Conferences?

I find myself asking this question more often these days: why do I go to conferences? And in particular, why do I speak at conferences? And my answers vary, as I really don't know.

This week I spoke at Oredev, a developer conference, and felt totally disconnected and invisible. I did not talk to any new people. And new people did not talk to me. At first, I was quick to blame it on my tester identity, but it isn't that, as I also identify as a polyglot programmer. I just did not get chances for a discussion without initiating it myself, and even when I did, topics changed from tech to life. I listened to many sessions, some really great and others not so much, and came back with a decision to cut down on conferences.

I used to learn from conferences, but now my "being aware of techniques" learning quota feels full. Knowing of AWS, SAM, lambdas and step functions takes me somewhere, but the real application of those ideas takes me a lot further. And conferencing is threatening my time for practice.

My situation with this is not quite the usual one. I've been collecting the number of talks I do per year, and I already decided to cut down a year ago. Yet, looking at where I ended up isn't exactly showing the commitment: I have 27 sessions in 2017, and 30 each year for 2016 and 2015. At this point of my life, talks emerge from my need to organize my thoughts and drive my learning, and there are smaller time investments that would give me the same value.

So I wonder if people are finding pieces of joy, enlightenment, thoughts from whatever I end up sharing. Maybe that is worth the investment? There was one woman from Oredev I can thank for really making my day, coming to say one thing to me after my talk: "Thank you for speaking. It is so wonderful seeing women on tech conference stages." Most people say nothing, and pondering on this made me realize one of my speaking motivations is that I crave acceptance and acknowledgement.

Thinking a little further, I turned to the test conferences I find the most valuable for me: TestBashes. I've come back from those with new colleagues in the community to learn with, even friends. People I get to meet elsewhere, who bring so much joy into my life. But in particular, I remembered there is one accomplishment from each TestBash that fills my heart with joy: I came back with a connection that created a new speaker.

Thank you Gita Malinovska, Bhagya Perera and Kate Paulk for making me feel like I had a role to play in the world, seeing how awesome speakers you are. Gita and Bhagya I mentored in speaking after TestBashes brought us together, and they never really needed me, but I needed them. Kate blew my mind with the discussions we had at TestBash Philly a year ago, when she seemed shy to take the stage, and I feel so proud seeing she delivered an awesome talk at TestBash Philly this year.

There are a lot more names I could drop that make me feel like I've served a purpose. But these three remind me that without going to conferences, our paths might not have crossed.

So I go to conferences for:

  • Collecting ideas that I need time to turn to actions at work
  • Making friends and maintaining our relationship
  • Encouraging awesome people to be the inspiration on stage they are off stage
I speak to make it cheaper to go. I speak in hope of recognition. I speak in hope of connection, as I have a hard time initiating discussions. But most of all, I speak to sort out my own thoughts. 

What are your reasons? 



Monday, November 6, 2017

Making Everyone a Leader

A year ago, some people were fitting the "Leader" title on me in the context of the testing community, and I felt strongly about rejecting it. We have enough self-appointed leaders calling for followers, and I felt - still feel - that we need to stop following and start finding our own paths, and just share enthusiastically.

Today, someone was fitting the "Leader" title on me in the context of work and our No Product Owner experiment. I was just as quick to reject it, but this time realizing even more strongly that I believe that for good self-organizing teams, everyone needs to become a leader instead of one self-appointed or group-selected leader.

I believe our No Product Owner experiment will show its best sides if I manage to avoid being appointed the "leader". There will be things where people choose to follow me: the idea of experimenting with something we thought was out of our reach, meeting people who "never have time" yet find time in three days when we ask, imagining the possible. But each one in my team will lead us on different things, and I follow with joy. Today I followed one of my developers as they led the way on solving customer problems and could use my contribution. I followed another one of my developers in supporting him when he imagined a feature we thought wasn't wanted, which turned out to be the "dream we did not dare to hope for". I followed my two tester colleagues in solving puzzles around automation where recognizing an element (no Selenium involved) wasn't straightforward, and working together was beneficial.

Everyone has room to be a leader. We don't have to choose one. We can let one emerge for different themes.

And what makes a leader, anyway? It's the followers. I choose to follow leaders in training, enthusiastically. It does wonders to how my group works. Maybe it would do wonders on communities too? 

Thursday, November 2, 2017

Trawling for feedback

As a team without a product owner, we needed to figure out our idea of what someone with a product management specialization could bring us. And we hit the discussion around the mysterious "customer voice".

At first we realized that having someone allocated as a "customer voice" with "decision power" (a product owner) isn't an automatic ticket for the team to hear any of that. So we ended up identifying a significant chunk of work the team isn't doing and would love done better, and it goes under the theme of trawling for feedback.

With customers in the millions, there's the feedback we get without them saying anything, through monitoring. Knowing when they have troubles, knowing when they use features we think they must be using - all of that is primarily a privacy challenge and only then a technical one. Just the idea of going for no product owner made us amp up our ability to see without invading privacy. It was necessary before, but now it was our decision to include the tools that improve our ability to help our customers. Mental state does wonders, it seems.

Then there's the feedback that requires time and effort. Reading emails and replying to them. Meeting people on specific topics, and meeting people on general topics for serendipitous feedback to emerge. There's the skill of recognizing what feedback is critical. Being available for feedback, and identifying what of it is the core, not the noise. And passing the core to people who can do something about it - the team.

We realized there is a fascinating aspect of timing to this feedback trawling. Think of it as a comparison to trawling for fish.

If you get a lot of fish without the proper storing and processing facilities, it will go bad and get wasted.

Feedback is similar - it is at its maximum power when fresh. That extra piece of motivation when you see the real person behind the feedback gets lost when we store the feedback for an appropriate time to act on it.

Having to deal with lots of fish at once creates costs without adding much value. 

While writing down a thing is a small cost, going through all the things we have written down, telling people of their status, and our ideas of their importance isn't a small cost anymore.

If you pass a fish from the fisherman to the second person in the chain, it is not as fresh anymore. 

Feedback delivered first-hand to the developers is just different. It's more likely to be acted on the right way, with added innovation.







Tuesday, October 31, 2017

Starting a No Product Owner Experiment

In the usual two-week cadence, our product owner showed up in our team room looking exhausted. We were glad to hear it wasn't the fact that he came to do planning with us, but that the early hit of winter and snow and the related tire change had left him feeling beat up.

Our Product Owner usually sends us a ton of emails (forwarded bits he wants us to react on), shows up regularly every two weeks, and whenever we ping him in between. He is not a part of our team and does not sit with us. He used to, but in an experiment to change the way the team communicated, he was moved further away, to a huge positive impact. The developers who used to report to him now report to the team, and we have not been the same since.

We started going through the usual motions of how my team plans, listing things that we knew needed addressing. I was making my own notes on a computer without sharing a screen, hoping someone else would step up and write stuff on our whiteboard like we always do, without success. The list was starting to get long, and yet another thing came up. The product owner spoke up: "I want you to prioritize this", cutting short, with the voice of power, the discussion that was leading to understanding. I could feel the energy sinking.

So I stepped in.

"Hey, this would seem like the time to introduce this thing we've been talking about for the last month or so with the product owner. We've agreed to try out an experiment with no product owner."

It wasn't the first time my team heard of it. It was definitely not the first time the product owner heard of it, as it was a stretch we had agreed on together.

I summarized the change in the context of this meeting: the team now has the control (and responsibility) of priority calls. We did not have one person who "wants us to do this", but we had a team set out to be customer obsessed and to care enough to understand what our real options and the right choices would be.

With an agreement to work out what the PO used to do and what it really means to have not a product owner but a PMS (Product Management Specialist - the person is still around for helping), we continued planning with high energy.

There was a little bit of rebellion in the air. But everyone discussed things together, heard each other out and ended up prioritizing exactly the thing our ex-PO wanted us to prioritize - by our own route, feeling more energized.

The outcome of the rebellious planning seemed better than what I had come to expect. There was no passivity around "officially delivered items", where the real outcome of what would happen would end up very different. We talked enthusiastically about improving some of our core automations, agreed on pairs that could contribute to things, and prioritized our short-term backlog.

My team does biweekly planning, but it is more of a biweekly backlog creation. And out of that list, we just work on items more in a kanban style.

My first lesson: it's not about what the change is, it's about changing. Trying something new out is energizing. Our real challenges of understanding what "No PO for three months" means are still ahead of us. 

Sunday, October 29, 2017

Yes, I am a manual tester and ...

In my team, we have two testers, and our interests and contributions couldn't be more different. While I like to refer to myself as an exploratory tester, most people around me think of me as a manual tester. I try very hard not to correct their terminology, but I often use the improv Yes and... rule to add to what others say.

Yes, I'm a manual tester and I create disposable automation.
Yes, I'm a manual tester and we just addressed some major issues that would have been next to impossible to spot without thinking hard around the product.
Yes, I'm a manual tester and while my hands are on the keyboard, the software as my external imagination speaks to me.

The practice of not correcting people's established terminology is not "help to cheapen and demean the craft" (1). That practice is focusing on what matters, and what matters is us collaborating, creating awesome stuff.

I might not have always liked the terms manual and automated testing, but I can live with the established vocabulary. Instead of changing the vocabulary, I prefer changing people's perceptions. And the people who matter are not random people on twitter, but the ones I work with, create with, every office day.

Looking at the two testers in my team, I can very easily see why I'm the "manual tester" - I think best with hands on keyboard, using the product as my external imagination. I prefer to bias myself to experiencing much of the testing as the users would - using the product. Even when I test with code, I prefer to bias myself on using the APIs as users would. The mechanism of running the test - manual - leaves me time and focus to think well with the product giving me hints on where to go next.

Similarly, I can easily see why the other tester is the automated tester. Their focus is on getting a program to run the tests unattended. They too think hard (often with less of an external imagination due to focusing on coding around a problem) while creating the script to run, and are all the time, with each test, forced to focus on the details. So their tool and approach of choice bias them to experience the product as a program can. The mechanism of running the test - automated, eventually - leaves them time to do more automated tests. Or rather, to try to figure out why the ones created are failing, again.

Together, we're awesome. If we were more inclined to move between the roles of manual and automated tester, we'd be more awesome individually. But as it stands now, we have plenty of bugs to find that automation couldn't find: they are about aiming for the wrong scope. The person doing automation could find them, if all their energy wasn't going into the details of how hard automating a Windows GUI application can be. So right now we're lucky to have two people, with different focuses.

I wrote this inspired by this - Reference (1):

So here. I just cheapened and demeaned the craft. Except I didn't. The word policing is getting old. The next generation of manual testers can just show how awesome we are, giving up the chains of test cases and thinking well with our hands on the keyboard and our brains activated.

Imagine what work would be like if we stopped policing the word choices and approached communication with a yes and... attitude.

Wednesday, October 25, 2017

Your stack isn't as full as mine

Recently, I've seen an increasing amount of discussion on the "full-stack engineer" coming my way. Just as it was important to me at some point to identify clearly as a "tester", I now have colleagues who find similar passionate meaning in the "full-stack engineer".

Typically, one of these full-stack engineers works on both front-end and back-end. So on the web stack, typically they cover the two ends.

An engineer wouldn't be properly full-stack if the stack did not include both dev and ops. So being DevOps is clearly a part of it. 

Some of these full-stack engineers take particular pride in including testing (as artifact creation) into their stack of skills. They can deal with testing so that they are not dependent on testers, and more importantly, so that they feel safer making changes throughout the stack that is getting too big to keep in mind at once.

The first place where the fullness of the stack starts really breaking is when we face the customer. Can a full-stack engineer expect to get all their requirements cleanly sliced, or should they also have capabilities on understanding, collecting and prioritizing the customer's explicit and implicit wishes? This is usually where a full-stack developer throws responsibility over to a complete product owner, who magically has the right answers. A Complete Product Owner is the customer-side match for the Full-Stack Developer.

And for me, the idea of being a Full-Stack Developer breaks in another way too. The web stack isn't always the full stack. For me it most definitely is less than half of the stack. The system created with the web stack is just as dependent on the system created with the C++/Python mix as the other way around.

So frankly, my dear full-stackers. Your stack isn't full enough yet. Time to move towards polyglot. Onwards, to the unicorn state.

*said on a day I had to look at C/C++, Python, Groovy, JavaScript/Angular, Java, CSS for work, and C# for hobbies. I feel slightly exhausted but certain it isn't going to change for anything but wider.

Saturday, October 21, 2017

How is European Testing Conference Different?

One sunny afternoon in San Diego over three years ago, I took a call with Adi Bolboaca. That call has since defined a lot of what my "hobbies" are (conference organizing) but also set an example of how I deal with things in general. From idea to schedule, that call was all it took. We decided to start a conference.

The conference was named European Testing Conference to reflect its vision: we were building the go-to testing conference in Europe and we'd take on the challenge of having the conference travel. In the three editions so far, we have worked with Bucharest (Romania), Helsinki (Finland) and Amsterdam (Netherlands).

As the Amsterdam edition is well on its way to take place on February 19-20, 2018, someone asked how we position ourselves - how is European Testing Conference different?

Testing, not testers

Our organizers are an equal mix of people who identify as testers and programmers. What brings us together is the interest in testing. The conference looks at testing as different roles do it and seeks to emphasize the collaboration of different perspectives in making awesome products. We like to think of testing as Elisabeth Hendrickson said it: it is too important to be left just for the specialized testers. Our abilities to solve the puzzles around feedback make a difference for quality, speed of delivery and long-term satisfaction for those of us who build the software.

Practical focus

We seek to make space for sessions that are practical, meaning they are more on the what and how as opposed to why, and they are more on the patterns and practices. We start with the idea that testing is important and necessary, and seek to raise the bar in how testing is done.

Enabling peer learning

We know that the best sessions in conferences, with regard to learning, often happen in the hallway track where people are in control of the discussions they engage in. Many conferences formalize the hallway track to happen on the side. We formalize hallway track sessions to be a part of the program so that we increase the chances of everyone going home with a great, actionable learning from peers.

Peer learning happens with interactive sessions that have just enough structure so that you don't have to be a superb networker, you can just go with the flow. As a matter of fact, we don't give you the choice of passively sitting and listening to a talk when you could learn from your peers in an interactive format, so these sessions are always conference-wide.

We do three different kinds of interactive sessions:

  • Speed Meet makes you go through people while giving the structure to ensure that it's not the usual chit-chat of introducing yourself; what the introducer gets to share is learner driven. Each participant creates a mind map, and the person you get to know will drive the discussion based on what they select from your map. 
  • Lean Coffee is a chance to discuss testing topics of the whole group's choice. Regardless of its name, it is more about discussions and less about coffee. We invite our speakers to facilitate tables of discussions, so this is also your chance to dig deeper into any of the topics close to our speakers' hearts. 
  • Open Space makes everyone a speaker. A good way to prepare for this is to think about what kinds of topics you'd love to discuss or what knowledge you'd like to share. You get to propose sessions, and they could also be on topics you know little of but want to learn more about. 

Lean Coffee and Open Space are regular sessions in conferences, but we have not seen anyone else do them as part of the day program, whole conference wide. You will meet people in this conference, not just listen to the speakers we selected.

Schedule by session types

Interactive sessions have no talk sessions to listen to passively at the same time. Similarly, when talk sessions take place, we have four of them scheduled on tracks. We also have in-conference workshops, and again when it's time to workshop, there are no talk sessions available simultaneously. This is to encourage a mix of ways of learning. It's hard enough to select which topic to go for, and if the session type is also a variable, it just gets harder to get the learning mix right.

Speakers selected on speaking their stories

All speakers we have selected have been through a collaborative selection process. This means that we did not select them based on what they wrote and promised they could talk on, we had a chat with each and every speaker and know how they speak. We're hyped about the contents they have to share as part of our great program.

Some of the talks are not ones the speakers submitted. When collaborating with a speaker, sometimes you learn that they have something great to share that they did not themselves realize they should have submitted.

Track speakers are keynote quality

We take pride in treating our speakers fairly, meaning we guarantee that they don't have to pay to speak and we compensate the direct costs of joining our conference. We go a bit further, sharing profits with the speakers. This means that the speakers are awesome. They are not traveling to speak with a vendor marketing budget to sell a tool or service, but are practitioners and great speakers.

Enabling paired sessions

Our program has sessions with two speakers, and when we select a session like that, we pay the expenses of both speakers. While we strongly believe that a two-person talk is not a talk where two people take turns on delivering a talk one could deliver, we actively identify lessons that require two people. We pair in software development, so we should be able to pair on our talks too.

Organized by testing practitioners

Our big fancy team is a team of practitioners doing the conference as a hobby. We love learning together, creating together and making a great program of testing available together. We spend our days testing and programming. We know what the day to day challenges are and what we need to learn. Our practitioner background is a foundation for our ability to select the right contents.

Traveling around Europe

Europe is a diverse area, and we travel around to connect with many local communities. It sometimes feels ambitious of us, as every year we have a new community to find and connect with to sell our tickets. Yet, going to places and taking awesome content to places is what builds us forward as a bigger community. 

We love other testing conferences

We don't believe that the field of testing conferences is full - there are so many people to reach and enable to join the learning in conferences. If our content and schedule are not right for you, we encourage you to look at the others. We love in particular conferences that enable speakers without commercial interest by paying their expenses, and we often give a shout-out to TestBashes (all of them!) and Agile Testing Days (both Germany and USA), and are delighted to be able to mention also Nordic Testing Days, Copenhagen Context and Romanian Testing Conference. 




Friday, October 20, 2017

Sharing while minding the price of shame

Some weeks ago, I was sitting at the London Heathrow airport, with a little post-it note in front of me saying "review, write, finalize". I had three separate writing assignments waiting to be dealt with, and what would be a better place to deal with writing than being stuck at an airport or on a plane. I started with the review, of an article that is now posted and still causing buzz on my twitter feed: Cassandra Leung's account of power misuse in the testing community by Mr Creep.

Reading it made me immediately realize I knew who Mr Creep was, and that I had tolerated his inappropriateness on a different scale just so that I could speak at his conferences. I knew that with who I was now, I was safe. I had the privilege of thinking his behavior was disgusting and yet tolerating it. Reading the experience through the eyes of someone with less privilege was painful.

I could do something. So I talked to this Mr Creep's boss. They did all the rest. Shortly after, I saw this Mr Creep changing his status in LinkedIn to Looking for new opportunities. I can only hope that the consequences of his own actions would make him realize how inappropriate they were. And that he would learn new ways of thinking and acting. At least he is no longer in this position of power. He's still an international speaker. Hopefully he is no longer the person who thinks this slide is funny:


This slide happened years ago. We did not realize what it could mean for the women in the community. While the source of this is this Mr Creep, there are other creepy "funny" slides with exclusive impact. Don't be Mr Creep when you present. 

The article that started this for me came out later. At the time of publishing, the proper reaction from the conference was already a fact, making this article very special amongst all the accounts of creepy behavior. This Mr Creep remains unnamed. And I believe that is how it should be. In an era where a lot of our easy access to power comes through social media, there are kinder forms of displaying power and expressing that something was inappropriate. Bullying the bully or harassing the harasser are not real options. For a powerful message on the price of shame, listen to the TED talk by Monica Lewinsky.

Calls like this are also common:
Outing him could be necessary if we didn't know he had already been addressed. But all too late.

There's been a number of women who have also come forward (naming him in private) with their own experiences with Mr Creep in the testing community, some with the exact same pattern. Others shared how they always said no to conference speaking in places associated with him. And the message of how unsafe even one Mr Creep could make things for women became more pronounced.

In the last days of me thinking of writing this article, my motives were questioned: maybe I just want to claim the credit for action? But there is a bigger reason that won't leave me alone before I write this:

I need to let other women know that they have the power to make a difference. When it appears that organizations will prioritize their own, sometimes they prioritize their community. They need someone to come forward. If you've been the victim, you don't need to come forward alone. Coming forward via proxy is what we started with here. And after creating the feeling of safety, we brought down the proxy structure giving power where it belongs - back with the victim.

I need to let other women know that the conference we've talked about in hushed voices has chances of again being a safe place for us to speak at.

I need to let everyone know that seeing all-male lineups may mean that all the women chose to stay safe and not go.

I was in a position of privilege to take the message forward. I was an invited international speaker, with an open invitation to future conferences I was ready to drop. I had a platform that gave me power I don't always have. But most of all, I couldn't let this be. I had to see our options.

While what I did was one discussion of someone else's experience, it drained me. It left me in a place where I couldn't speak of my experience as part of this. It left me with guilt, second-guessing if other people's choices of boycotting would really have been an option. It left me with fear that Mr Creep would target his upset at me (haven't seen that so far). But most of all, it filled me with regrets, as I now know that I could have made choices to address the problem a lot earlier.

Mr Creep had to hurt one more person before I was ready to step up. Mr Creep got to exist while I had something I could personally lose by outing him or confronting him on any of his behaviors.

I need to write this article to move forward, and start my own recovery. This Mr Creep is one person, and there are many more like that around. Let's just call out inappropriateness while considering the appropriate channels. 

Monday, October 16, 2017

Innocent until proven guilty

I read a post Four Episodes of Sexism at Tech Community Events, and How I Came Out of the (Eventually) Positive and while all the accounts are all too familiar, there is one aspect that I feel strongly about. Story #3 recounts:
It takes me two years to muster the confidence to go to another tech event.
The lesson here is that it is ok to remove yourself from situations where you don't feel comfortable. For many people, not showing up is a very real option, because someone can make us feel uncomfortable in ways that matter.

I hate the ways people report being made to feel uncomfortable. And I particularly hate when someone's report of being made uncomfortable gets dismissed or belittled by the organizers of conferences because there is a belief that the "offenses" are universally comparable. That alleged perpetrators are always innocent until proven guilty. This idea is what makes people, word against word in positions of unequal power, allow the bad behaviors to continue.

There will not be clear cut rules of what you can and cannot do in conferences to keep the space safe. Generally speaking, it is usually better to err on the side of safe. So if you meet someone you like beyond professional interests in a professional conference, not expressing the interest is on the safe side.

Some years ago, I was at a conference where someone left half-way through the conference because of someone else's bad behavior. I have no clue what the bad behavior was, and yet I side with the victim. For me, it is better to err on the side of safe again, and in a professional context reports like this don't get made lightly. Making false claims is not a common way of getting rid of people, even if that gets recounted with innocent until proven guilty.

We will need to figure out good ways of mediating issues. Should a sexist remark cost you a place in the conference you've paid for - I think yes. Should a private conversation that makes those overhearing it uncomfortable cost you a place in the conference you've paid for - I think yes. On some occasions, an apology could be enough of a remediation, but sometimes protecting the person who was made to feel unsafe takes priority, and people misbehaving don't get automatic access to knowing who to get back at for potential retaliation. It's a hard balance.

The shit people say leaves its marks. I try not to actively think of my experiences, even to forget them. I don't want to remember how often saying no to a personal advance has meant losing access to a professional resource. I don't want to remember how I've been advised on clothing to wear while speaking in public. I don't want to remember how my mistakes have been attributed to the whole of my gender. There's just so much I don't want to remember.

Consequences of bad behaviors can be severe. Maybe you get kicked out of a 2000 euro conference. Maybe you get fired from your job. Maybe you get publicly shamed. Maybe you lose a bunch of customers.

Maybe you learn and change. And if you do, hopefully people acknowledge the growth and change.

If you don't learn and change, perhaps excluding one to not exclude others is the right thing to do.

In professional settings we don't usually address litigation, just consequences of actions and actions to create safer spaces. Maybe that means taking the person stepping forward feeling offended seriously, even when there is no proof of guilt.

I don't want people reporting years of mustering the confidence to join the communities again. And even worse, many people reporting they never joined the communities again, leaving the whole industry. I find myself regularly on the verge of that. Choosing self-protection. Choosing the right to feel comfortable instead of being continuously attacked. And I'm a fighter. 

Saturday, October 14, 2017

Caring for Credit

The last three years have been a major source of introspection, as I've taken on the journey of becoming (more) comfortable with pairing and mobbing. Every time someone hints that I want to pair to "avoid doing my own work", I flinch. I hear the remark echoing in my head, emphasizing my own uncertainties and experiences. Yet, I know we do better together, and fixing problems as they are born is better than fixing them afterwards. 

A big part of the way I feel is the way I was taught to feel while at university. As one of the 2% women minority studying computer science, I still remember the internal dialogue I had to go through. The damage done by the few individuals telling me that I probably "smiled my way through programming classes", making me avoid group work and need to prove that my contribution in a group was anything more than just smiling. And I remember how those experiences reinforced the individual contributor in me. Being a woman was different, and I took every precaution I could to be awesome as much by myself as I could. If I ever did not care for doing more than others, that would immediately backfire. And even if I did care, I could still hear the remarks. I cared then, I still do. And I wish I wouldn't. 

My professional tester identity is a way of caring for credit. It says which of all the things I do is so special that I would like it to be separately identified. It isn't keeping me in a box that makes me do just testing, but it says that that is where I shine. That is where I contribute the most of my proud moments. Yet it says that I'm somehow a service provider, probably not in the limelight of getting credit, and I often go back to liking the phrase:
The best ideas win when you care about the work over credit.
I want to create awesome software, and be recognized for my contributions to it. Yet I know that my need for recognition is a way of not getting the best ideas to win - nor is anyone else's need for recognition.

As a woman, the need for attribution can get particularly powerful. If you're asked about the great names in software, most people don't start listing women - even if that has recently changed in my bubble, which talks about awesome people like Ada Lovelace, Margaret Hamilton, and Grace Hopper. And looking a little beyond into science, listing women becomes even less of a thing.

The one man we generally tend to think of first in science is Einstein. Recently I learned that he had a wife who was also a physicist and contributed significantly to his groundbreaking science. He did not raise her significant contributions to the general public. Meanwhile, Marie Curie is another name we'd recognize, and the reason recognition is tied to her is her (male) colleagues actively attributing work to her. 

Things worth mentioning are usually a result of group work, yet we tend to attribute them to individuals. When we eat a delicious cake, we can't say if it was great because of the sugar, the eggs or the butter. All were needed for the cake to become what it is. Yet in creating software products, we tend to have one ingredient (programming) take all the credit. Does attribution matter then? 

It matters when someone touts "no women of merit" just for not recognizing the merited women around them. It matters when people's contributions are assessed. Reading recent research saying that women researchers must publish without men to get attributed and thus get tenure made me realize how much the world I was taught in at school still exists. 

People are inherently attribution seeking - we wish to be remembered, to leave our mark, to make a difference. A great example of this is the consideration of why there are no baby dinosaurs - leading to a realization that 5/12 identified species are actually just adolescent versions of their adult counterparts. 

From all of my talks, the bit that always goes viral is an adaptation of James Bach's saying: 
I've lived this for years and years, and built a whole story of illusions I've broken, driving my tester identity through illusion identification. Yet, I will always be the person popularizing past sayings. 

Caring for credit, in my experience, does more harm than good. But that is what humanity is built around. Take this as a call to actively share the credit, even when you feel a big part of the credit should belong to you. We build stuff together. And we're better off together. 

Friday, October 13, 2017

What Does a Product Owner Do?

As an awesome tester, I find myself very often pairing with a product owner on figuring out how to fix our ways of working so that we have the best chances of success when we discover the features while delivering them. My experience has been that while a lot of the automation-focused people pair testers up with developers, pairing on risk and feedback with the product owner can be just as (if not more) relevant.

Over the years, I've spent my fair share of time shaping up my skills of seeing illusions from a business perspective, and dispelling them hopefully before they do too much damage. Learning to write and speak persuasively is part of that. I've read tons of business books and articles, and find that lessons learned from those are core to what I still do as a tester.

I find that a good high-level outline of the skills areas I've worked on is available with the Complete Product Owner poster. Everything a product owner needs to know is useful in providing testing services.
Being a Product Owner sure is a lot of work! - William Gill

In preparation of a "No Product Owner" experiment, I made a list of some of my expectations on what a product owner might do (together with the team).

What Does a Product Owner Do?
  • has and conveys a product vision
  • maintains (creates and grooms) a product backlog - the visible (short) and the invisible (all wishes)
  • represents a solid understanding of the user, the market, the competition and future trends
  • allows finishing things already started, at least for a pre-agreed time-box
  • splits large stories to smaller value deliveries
  • prepares stories to be development-ready (discovery work)
  • communicates the product status and future to other stakeholders
  • engages real customers and acts as their proxy
  • calculates ROI before and after delivery to learn about business priorities
  • accepts or rejects the work done
  • maintains conceptual and technical integrity of the product 
  • defines story acceptance criteria
  • does release planning
  • shares insights about the product to stakeholders through e.g. demos 
  • identifies and coordinates pieces to overall customer value from other teams
  • ensures delivery of product (software + documentation + services) 
  • responds to stakeholder feedback on wishes of future improvements and necessary fixes
  • explains the product to stakeholders and drives improvement of documentation that enables support to deal with questions
These people are busy, and can use help. How much of your testing is making the life of a product owner easier? 

Thursday, October 12, 2017

Run the code samples in docs

I was preparing for a session on exploratory testing at a conference, wanting to make a point of how testing an API is just like testing a text field. Even the IDE you use gives you keyword recognition based on just a few letters, and whatever values you pass in are a guided activity. The thinking around those fields is what matters. And as I was preparing, I went to my favorite test API and was delighted to notice that since the pain of the public testing sessions, there was now some basic getting-started documentation with examples.

I copypasted an example around approving arrays into the IDE, and things did not go as I would have expected. The compiler was giving me errors, and I could barely extend my energy to see what the error messages were about. There were simple errors:
  • a line of code was missing a semicolon
  • a variable a was introduced, yet when it was used it got renamed to x
As a proper tester, I was more happy than sad with the lack of quality in the documentation, and caused a bit of pain to the poor developer by asking him not to fix it for a few hours so that I could have other testers also see how easy finding problems in a system is when documentation is part of your system. I think the example worked nicely for encouraging anyone to explore an API with its documentation.

The cause of the problem I saw was that the code sample was never really executed. And even if it had been executed once, over time it could break with changes, as it wasn't always executed with the code.

A lot of times, we think of (unit) tests as executable documentation. They stay true to functionality if we keep on making them pass as we change the software. Tests work well to document drivers. But for documenting frameworks, you need examples of how the framework calls you. It makes sense to do the examples so that you can run them - whether they are tests or another form of documentation.
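To make that concrete, here is a minimal sketch of the idea using Python's doctest, not the API from the story above: the getting-started example lives in the documentation itself (the docstring) and is executed alongside the tests, so it breaks loudly instead of rotting quietly. The approve_list function is a made-up stand-in, loosely in the spirit of approving arrays.

    def approve_list(items):
        r"""Return a printable, numbered form of a list.

        >>> approve_list(["a", "b", "c"])
        '0) a\n1) b\n2) c'
        """
        return "\n".join(f"{i}) {item}" for i, item in enumerate(items))

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # runs every example embedded in the docs

If someone renames a variable or drops a character in the example, the next run says so - the documentation can't quietly drift away from the code.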

Documentation is part of your API. Most of us like to start with an example. And most of us choose something else, if possible, when the documentation isn't helpful. Keep it running.

Calling bs on the pipeline problem

Yesterday was the Women in Tech day in Finland. After the appropriate celebrations and talks, today my feeds are filled with articles and comments around the pipeline problem. I feel exhausted.

The article I read about getting girls into the industry quotes 23% women in the industry now. Yet, look at the numbers in relevant business and technical leadership positions. One woman among 5-6 people in the leadership groups isn't 23%. No women as head technical architects is even further from 23%. And don't start telling me that there are no women of merit. Of course there are. You might just not pay attention.

In the last week, I've personally been challenged with two things that eat away at my focus on being amazing and technical.

First of all, I was dodging a "promotion" into a dead-end middle management position. How would that ever make me the head technical architect I aspire to be? Yes, women with emotional intelligence make strong managers. But we also make excellent technical leaders.

Second, I was involved in a case in the community where harassment got someone fired. It has been extremely energy-draining even if I was just in a support role.

Maybe having to deal with so much of the extra emotional labor is what makes some people think again less of my merits. And I'm getting tired of it.

We talk of the pipeline problem, of how little girls don't take interest in computers and programming. If they look ahead to their role models, they see women fighting for their advancement and mere existence. The pipeline leaks, and almost everyone who is in it is regularly considering an exit, just to get rid of the attitudes the ones with more privilege don't have to deal with.

How about improving things for the future generations by focusing on the current one, so that we can honestly tell our little girls that this is an industry worth going for? It is the industry of the future, and we're not going to leave it, but a little bit of support for the underdogs would be nice.

When I do keynotes in conferences, I get the question "are you here to watch your kids while your husband speaks" from the other female keynoter's husband. I get the assumption "you're one of the organizers" when most of the organizers are women. And yet in the same places I get men telling me that there is no difference in how we are treated.

Just pick 50% women of potential into the relevant groups we all want to reach. Those 50% of women are not going to be worse than the men you're selecting now. Those positions help them realize their full potential. And showing this industry is more equal might just help with the beginning of the pipeline too. Because then the little girls don't only have a dad who makes sure they get interested in math and STEM, they have a mom who could be more too.

Sunday, October 1, 2017

Machine Learning knows it better: I’m a 29-year-old man

Machine Learning (ML) and Artificial Intelligence (AI) are the hit theme in testing conferences now. And there are no entry criteria on who gets to speak on how awful or great that is for the future of testing, the existence of testers and the test automation we are about to create. I’m no different. I have absolutely no credentials other than fascination to speak on it. 

The better talks (to me - always the relative rule) are ones where we look into how to decompose testing problems around these systems. If the input to the algorithm making decisions changes over time, the ideas we have on deterministic testing for the same inputs just won’t make sense unless we control the input data. And if we do, we are crippling what the system could have learned, and allowing it to provide worse results. So we have a moving target on the system level.

My fascination with these systems has led me to play with some that are available. 

I spent a few hours on the Microsoft Azure cognitive services API for sentiment analysis, inspired by a talk. As usual, I had people work with me on the problem of testing, and was fascinated by how different people modeled the problem. I had programmers who pretty much refused to spend time testing without getting a spec of the algorithm they could test against. I had testers who quickly built models of independent and dependent variables in input and output, and dug in deeper to see if their hypotheses would hold, designing one test after another to understand the system. Seeing people work to figure out if the teaching data set is fixed or growing through use was fascinating. And I had testers who couldn’t care less about how it was built but focused on whether it would be useful and valuable given different data. 
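For a flavor of what that hypothesis-driven exploration looked like, here is a rough sketch in Python. The endpoint, key header and payload shape are placeholders I made up for illustration, not the actual Azure contract; the point is varying one input factor at a time and comparing how the scores move.

    import requests

    ENDPOINT = "https://example.invalid/sentiment"  # placeholder, not the real service URL
    API_KEY = "YOUR-KEY-HERE"                       # placeholder

    # Vary one factor at a time and watch how the score moves.
    VARIATIONS = [
        "I love this product.",
        "I LOVE this product!!!",          # does emphasis change the score?
        "I love this product. Not.",       # does negation register at all?
        "I loved this product in 2015.",   # does tense or time matter?
    ]

    def score(text):
        """Send one document and return its sentiment score (0..1 in this sketch)."""
        response = requests.post(
            ENDPOINT,
            headers={"Ocp-Apim-Subscription-Key": API_KEY},
            json={"documents": [{"id": "1", "text": text}]},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["documents"][0]["score"]

    if __name__ == "__main__":
        for text in VARIATIONS:
            print(f"{score(text):.2f}  {text}")

The interesting part is not any single score but whether the differences between them match the model you’ve built of the system’s variables.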

I also invested a moment of my time to learn that I’m a 29-year-old man based on my twitter feed. This result was from the University of Cambridge Psychometric Centre’s service http://applymagicsauce.com. The result is obviously off, and just a reminder that “6 million volunteers” isn’t enough to provide an unbiased data set for a system like this to learn from. And 45 peer-reviewed scientific articles add only a “how interesting” to the reliability of the results. 

My concern on ML/AI isn’t whether it will replace testers. Just for argument’s sake, I believe it will. Since I started mob testing with groups, my perspective on how well testers actually perform in the large has taken a steep dip, and I base my hope for testers’ future usefulness on their interest to learn, not on what they perform today. The “higher function thinking” in testing exists, but is rarer than the official propaganda suggests. And the basic things won’t be that impossible to automate. 

My concern on ML/AI is that people suck. With this I mean that people do bad things given the opportunities. And with ML/AI systems, as the data learned over time changes the system, we can corrupt it both in the creation of the original model and in using it while it is learning. We need to fix people first.

The people with time and skills sometimes use their time on awful problem domains. There should be a better reason than “we could do this” to create a system that quite reliably guesses people’s sexual orientation from pictures, opening a huge abuse vector. 

The people who get to feed data into the learning systems take joy in making the systems misbehave, and not in the ways we would think testing does. Given a learning chat bot, people first teach it to swear, spout out racist and sexist slurs and force the creators to shut them down. 

Even if the data feeding was openly available for manipulation, these systems tend to multiply our invisible biases. The problems I find fascinating are focused around how we in the tech industry will first learn about the biases (creating systems in diverse groups, hint hint), fix them in ourselves to a reasonable extent and then turn the learnings into systems that are actually useful without significant abuse vectors. 


So if there is a reason why the tester role remains, it is that figuring out these problems requires brain capacity. We could start by figuring out minorities and representation, kindness and consideration without ML/AI. That will make us better equipped for the problems the new systems introduce. 

Saturday, September 30, 2017

Time to update threat modeling

Working in a security company, there is an activity we try to do routinely, at least when anyone hints at not having done it. That activity is security threat modeling. Getting smart people together, supported by a heuristic (STRIDE), we explore what could possibly go wrong security-wise with the design we’ve made by now. The purpose of threat modeling is to learn and change when we learn. And for someone trying to drive forward better quality, there’s not a more powerful argument than connecting a bug somehow to security. 

Heuristics are used to keep the discussion both focused and multifaceted. People would easily focus on one type of problem, and heuristics like STRIDE help with thinking from a few more perspectives. It’s far from complete, but it has been a good enough basic approach to get started with. The acronym opens up to the words Spoofing, Tampering, Repudiation, Information disclosure, Denial of service and Elevation of privilege. 

Security is about not letting people do bad stuff with systems, hopefully while allowing the right people to do what they were trying to achieve with the use of the system. All of the perspectives easily map to the idea of the users and attackers. 

But with many modern systems, there is one often dismissed theme I would bundle with security, while I realize that for me as a tester it has long been a separate concern. That is one of abuse. I’m exploring extending STRIDE with an A. 
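As a sketch of what I mean, the extension is nothing fancier than one more row of prompts in the session. The wording below is my own paraphrase, not an official checklist.

    # STRIDE plus an Abuse category, as threat modeling session prompts.
    THREAT_PROMPTS = {
        "Spoofing": "Can someone pretend to be another user or component?",
        "Tampering": "Can data or code be modified in transit or at rest?",
        "Repudiation": "Can someone deny an action because we cannot trace it?",
        "Information disclosure": "Can data leak to people who should not see it?",
        "Denial of service": "Can someone make the system unavailable?",
        "Elevation of privilege": "Can someone gain rights they should not have?",
        "Abuse": "Can the feature be turned against a person or a group, even when used as designed?",
    }

    def run_session(feature):
        """Walk one feature through every prompt, one discussion per category."""
        for category, prompt in THREAT_PROMPTS.items():
            print(f"[{feature}] {category}: {prompt}")

    if __name__ == "__main__":
        run_session("profile picture upload")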

Abuse vectors are often unique. They are ideas of how we could unintentionally open up ways for targeting misbehaviors against a group with use of the systems. And thinking of the abuse threats is getting increasingly important.

Let’s explore a few ideas around this.

A prominent tech woman ended up with targeted abuse on Github. At a time when Github allowed people to be added to projects without their consent, someone thought it was a fun thing to add her to projects she would by no means associate with. All those projects then ended up being a part of her profile. We would want to make sure our features don’t enable high-visibility cyber bullying. 

A group of people built a learning bot, which turned into a monster in a matter of hours. We would want to make sure that with the learning systems, we can control the quality of the data the system learns from.

A face recognition software was built, and it did not recognize faces of people of color. The sample set the system was built on did not do a good job at being representative, and the biases of the creators got coded into the functionality. We would want to make sure we don’t make a group of people invisible with systems intended for wide use. 

A phone had a feature of facial recognition for logging in. It worked perfectly for images of the owner’s face. We would want to make sure that if we use faces as credentials, gaining access to our personal data is not one picture away. 


Abuse as an attack vector is connected with STRIDE, but different. And we need to start actively thinking of it as we create more sophisticated systems that include our assumptions of privilege and biases. 

Tuesday, September 26, 2017

Mob Programming on a Robot Framework Test

A year ago as I joined my current place of work, I eagerly volunteered to share on Mob Programming in a session. As I shared, I told people across team lines that I would love to do more of this thing here. It took nearly a year for my call to action to sink in.

A few weeks back, a team lead from a completely different business line and product than I work on pinged me on our messaging system. They had heard that I did this thing called Mob Programming, and someone in the team was convinced enough that they should try it, so what, then, should the practical steps be? They knew they wanted to mob on test automation, and expressed that a challenge they were facing was that there was one person who pretty much did all the stuff for that, and sharing the work in the team was not as straightforward as one might wish. Talking with three members of the team, including the team lead and whoever had drawn me in, we agreed on a time box of two hours. For setup, one would bring in the test setup (which was a little more than a computer, as we were testing a Router Box) and ideas of some tests we could be adding.

It took a little convincing (not hard though) to get the original three people to trust me with the idea of not having first a session of teaching and introducing the test automation setup, and that we would just dive in. And even if we agreed on that in advance, the temptation of explaining bits that were not in the immediate focus of what the task drove us towards was too much to resist. 

The invitation for the team of 7 included “test automation workshop”, without a mention of the mechanism we would be using. And as we got to work on adding a test, I asked the group to figure out who knew the least and made them sit in front of the keyboard first, just stating the rule of “no thinking on the keyboard”. I also told them we’d be rotating every 3 minutes, so they would all get their chance on the keyboard and off, except for the one automation expert. Their rule was to refrain from navigating unless others did not know (count to three before you speak). 

Watching the group work for 1.5 hours was a delight. I kept track of rotation, and stepped in to facilitate only when the discussion got out of hand and selecting between options was too hard. I noticed myself unable to stop some of the lecture-like discussions where someone felt the need to explain, but the balance of doing and talking was still good. People were engaged. And it was clear to an external person that the team had strong skills and knowledge that were not shared, and in the mob format the insights and proposals of what was the “definition of done” for the test case got everyone’s contributions.

I learned a bit more about Robot Framework (and I’m still not a fan of introducing a Robot language on top of Python when working with just one would seem the route of less pain). I learned about the use of Emacs (which I had somewhat forgotten) and that I could still live without it. I learned about the different emphasis people had on naming and documentation in tests. I learned about their ideas of grouping tests into suites and tagging them. I learned about thinking in terms of separating tests when the Do is different, not when Verify needs a few checks for completeness. I learned to Google Robot Framework library references. And I learned that this team, just like many other teams at the company, is amazing. 
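The Do versus Verify lesson is the one I keep coming back to. Here is a tiny illustration of it, sketched in plain Python rather than Robot Framework syntax, with a made-up RouterCli standing in for the real setup: one action with several checks stays a single test, while a different action becomes its own test.

    class RouterCli:
        """Fake router interface, just enough to make the sketch self-contained."""
        def reboot(self): self.online = True
        def is_online(self): return getattr(self, "online", False)
        def wan_ip(self): return "192.0.2.1" if self.is_online() else None
        def update_firmware(self, version): self.version = version
        def get_setting(self, key): return {"ssid": "office-net"}.get(key)

    def test_reboot_restores_connectivity():
        router = RouterCli()
        router.reboot()                   # one Do...
        assert router.is_online()         # ...verified from more than one angle
        assert router.wan_ip() is not None

    def test_firmware_update_keeps_settings():
        router = RouterCli()
        router.update_firmware("latest")  # a different Do gets its own test
        assert router.get_setting("ssid") == "office-net"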

Asking around in the retro, here’s what I got from the team: This was engaging, entertaining. We expected to cover more. The result of what we did was different from the first idea of what the test would be, and different means better in this case. 


My main takeaway was to do this with my team on our automation. If I booked a session, they would show up. 

Thursday, September 21, 2017

What makes a test automation expert?

I was part of a working group that created an article called 125 Awesome Testers You Should Keep Your Eye on Always. It may not be obvious, but that list is a response to another article called 51 automated testing Experts You Should Keep Your Eye on Always. That list had only four women (at least it had four women!) and let me tell you a big public secret:
It is not because there aren't many awesome women in automation. It is because people don't look around and pay attention.
I could have many different criteria on what makes a test automation expert:
  • Speaks about test automation in public (conferences, articles) in a way that others find valuable
  • Does epic stuff on making automation work out and do real testing
  • Is identified as a creator of a test automation framework or library
  • Speaks only of automation and never in a manner that addresses its limits
The 125 awesome testers list does not identify automation separately, because I find that most of these people contribute to test automation in a significant way. Not all of the people on either of those lists have created an open source tool of their own. Not all of the people on either of those lists write test automation code as their main thing.

We can be awesome at automation in so many ways. Writing code alone in a corner is not the only way. Many of us work in teams that collaborate: pair, or even mob. Coding is not the only way to do automation.
  • Delivering insights that are directly transferable to useful test automation is a way of doing automation. 
  • Working on the automation architecture, defining what we share is a way of doing automation.
  • Helping see what we've done through lenses of value in testing is a way of doing automation. 
  • Reading code without writing a line and commenting on what gets tested is a way of doing automation. 
  • Pairing and mobbing are ways of doing automation.
We don't say coding is all there is to application development, so why would coding be all there is to test automation development?
There's been a particular experience that has shaped my thinking around this a lot, which is working with mob programming. After programming in 14 different languages, I still identified as a non-programmer because my interests were wider. I actively forgot the experience I had, and downplayed it for decades. What changed me was seeing people who are programmers in action. I did not change because I started coding more. I changed because I started seeing how little everyone actually codes.

The image below is from a presentation by Anssi Lehtelä, a fellow tester in Finland who now also has two years of mob programming with his team under his belt. A core insight I find we share is that in coding, there is surprisingly little of actual coding. It's thinking and discussions. And that's what we've always been great at too! And don't forget googling - they google like crazy!

Lists tell you who the list maker follows. Check whether you even have a chance of recognizing the awesome women in automation by running your twitter feed through http://proporti.onl. It can be brutal. Mine is 53 % women. In the numbers I can follow, there's easily a brilliant, inspirational woman to match every single man. In any topic, including automation. Start hearing more voices.

Monday, September 18, 2017

Announcing an Awesome Conference - European Testing Conference 2018

TL;DR: European Testing Conference 2018 in Amsterdam February 19-20. Be there! 

Two months of Skype calls with 120 people submitting to European Testing Conference 2018 in Amsterdam have now transformed into a program. We're delighted to announce the people you get to hear from, and the topics you get to learn from in the 2018 conference edition! Each one of these has been hand-picked for practical applicability and diversity of topics and experiences in a process of pair interviews. Thank you to the awesome selection team of 2018: Maaret Pyhäjärvi, Franziska Sauerwein, Julia Duran and Llewellyn Falco.

We have four keynotes for you, balancing testing as testers and programmers know it and cultivating cross-learning:
  • Gojko Adzic will share on Painless Visual Testing
  • Lanette Creamer teaches us on how to Test Like a Cat
  • Jessica Kerr gives the programmer perspective with Coding is the easy part - Software Development is Mostly Testing
  • Zeger van Hese shares the Power of Doubt - Becoming a Software Sceptic
With practical lessons in mind, we reserve 90-minute slots for the following hands-on workshops. You get to choose two to participate in, as we run each session twice during the conference:
  • Lisa Crispin and Abby Bangser teach on Pipelines as Products Path to Production
  • Seb Rose and Gaspar Nagy teach on Writing Better BDD Scenarios
  • Amber Race teaches on Exploratory Testing of REST APIs
  • Vernon Richards teaches on Scripted and Non-Scripted Testing
  • Alina Ionescu and Camil Braden teach on Use of Docker Containers
While workshops get your hands into learning, demo talks give you a chance to watch someone experienced do something you might want to mimic. We wanted to run three of these side by side, but added an organizer bonus talk on something we felt strongly about. Our selection of demo talks is:
  • Alexandra Schladebeck lets you see Exploratory Testing in Action
  • Dan Gilkerson shows you how to use Glance in making your GUI test code simpler and cleaner
  • Matthew Butt shows how to Unit/Integration Test Things that Seem Hard to Test
  • Llewellyn Falco builds a bridge for more complicated test oracles sharing on Property-Based Testing
Each of our normal talks introduces an actionable idea you can take back to your work. Our selection of these is:
  • Lynoure Braakman shares on Test Driven Development with the Art of Minimal Test
  • Lisi Hocke and Toyer Mamoojee share on Finding a Learning Partner in Borderless Test Community
  • Desmond Delissen shares on a growth story of Two Rounds of Test Automation Frameworks
  • Linda Roy shares on API Testing Heuristics to teach Developers Better Testing
  • Pooja Shah introduces Building Alice, a Chat Bot and a Test Team mate
  • Amit Wertheimer teaches Structure of Test Automation Beyond just Page-Objects
  • Emily Bache shares on Testing on a Microservices Architecture
  • Ron Werner gets you into Mobile Crowdsourcing Experience
  • Mirjana Kolarov shares on Monitoring in Production 
  • Maaret Pyhäjärvi teaches How to Test A Text Field
In addition to all this, there are three collaborative sessions where everyone is a speaker. First there's a Speed Meet, where you get to pick up topics of interest from others in fast rotation and make connections already before the first lunch. Later, there is a Lean Coffee, which gives you a chance for deep discussions on testing and development topics of interest to the group you're discussing with. Finally, there's an Open Space where literally everyone can be a speaker, bringing out the topics and sessions we did not include in the program, or where you want to deepen your understanding.

European Testing Conference is different. Don't miss out on the experience. Get your tickets now from http://europeantestingconference.eu

Saturday, September 16, 2017

How Would You Test a Text Field?

I've been doing tester interviews recently. I don't feel fully in control there, as there's an established way of asking things that is more chatty than actionable, and my bias for action is increasing. I'm not worried that we hired the wrong people, quite the opposite. But I am worried we did not hire all the right people, and that some people would shine better given a chance to do instead of talk.

One of the questions we've been using where it is easy to make the step from theory to practice is: how would you test a text field? I asked it in all the interviews, around a whiteboard when I wasn't on my private computer with all sorts of practice exercises. And I realized that the question tells a lot more when done on a practice exercise.

In the basic format, the question reveals how people think of testing and how they generate ideas. The categorization I use here is heavily based on years of thinking and observation by two of my amazing colleagues at F-Secure, Tuula Posti and Petri Kuikka, and originally inspired by discussions on some of the online forums some decades ago.

Shallow examples without labels - wannabe testers

There's a group of people who want to become testers but as yet have little idea of what they're getting into, and they usually tend to go for shallow examples without labels.

They would typically give a few examples of values, without any explanation of why that value is relevant in their mind: mentioning things like text, numbers and special characters. They would often try showing their knowledge by saying that individual text fields should be tested in unit testing, and suggest easy automation without explaining anything else about how that automation could be done. They might go on about hardware requirements, just to show they are aware of the environment, but go too far in their idea of what is connected. They might jump into talking about writing all this into test cases so that they can plan and execute separately, and generate metrics on how many things they tried. They might call this a really big task and suggest setting up a project with several people around it. And they would have a strong predefined idea of their own of what the text field looks like on screen, like just showing text.

Seeing the world around a text box - functional testers

This group of people have been testers and picked up some of the ideas and lingo, but often also an over-reliance on one way of doing things. They usually see there's more than entering text into a text box that could go wrong (pressing the button, trying enter to send the text) and talk of the user interface more than just the examples. They can quickly list categories of examples, but also stop that list quite quickly, as if it was an irrelevant question. They may mention a more varied set of ideas, and list alphabetic, numeric, special characters, double-byte characters, filling up the field with long text, making the field empty, copy-pasting to the field, trying to figure out the length of the field, erasing, fitting text into the visible box vs. scrolling, and suggest code snippets of HTML or SQL, the go-to answer for security. They've learned there are many things you can input, and that the field is not just about basic input but has dimensions of its own.

This group of people often wants to show the depth of their existing experience by moving the question away from what it is (the text field) to processes: emphasizing how relevant it is to report to developers through bug reports, how the developers may not fix things correctly, and how a lot of time goes into retesting and regression testing.

Tricks in the bag come with labels - more experienced functional testers

This group of testers has been looking around enough to realize that there are labels for all the examples the others just list. They start talking of equivalence partitioning and boundary values, of testing positive and negative scenarios, and can list a lot of different values and even say why they consider them different. When the list starts growing, they point out that priority matters and not everything can be tested, and may even approach the idea of asking why anyone would care about this text field, and where it is. But the question isn't the first thing; the mechanics of possible values are. Their prioritization focus takes them to the use of time in testing it, and they question whether it is valuable enough to be tested more. Their approach is more diversified, and they are often aware that some of this stuff could be tested at unit level while other parts require it integrated. They may even ask if the code is available for them to see. And when they want to enter HTML and SQL, they frame those not just as inputs but as ideas around security testing. The answer can end up long, and show off quite a lot of knowledge. And they often mention they would talk to people to get more, and that different stakeholders may have different ideas.
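To show what I mean by the labels, here's a minimal sketch of turning equivalence partitions and boundary values into checks. The validate_name function and its 50-character limit are made up for illustration; a real text field would have its own rules:

    import pytest

    MAX_LEN = 50  # assumed limit for this hypothetical text field

    def validate_name(text: str) -> bool:
        """Hypothetical stand-in for the field under test: non-empty, within limit."""
        return 0 < len(text) <= MAX_LEN

    # Equivalence partitions and boundary values, each with the reason it is there.
    @pytest.mark.parametrize("value, expected", [
        ("Maaret", True),                     # plain alphabetic, happy path partition
        ("", False),                          # empty - invalid partition, lower boundary
        ("a", True),                          # shortest valid value - lower boundary
        ("a" * MAX_LEN, True),                # longest valid value - upper boundary
        ("a" * (MAX_LEN + 1), False),         # just over the limit - boundary + 1
        ("änne 漢字", True),                   # non-ASCII and double-byte characters
        ("<script>alert(1)</script>", True),  # HTML as input, an opening to security questions
    ])
    def test_text_field_partitions(value, expected):
        assert validate_name(value) == expected

The listing is the easy part; the interesting differences show up in whether people can explain why each value is there and what they would learn from it.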

Question askers - experienced and brave

There's a group who seems to know more even though they show less. This group realizes that testing is a lot about asking questions, and that a mechanistic approach of listing values is not what it takes to succeed. They answer back with questions, wanting to understand typically the user domain and at best also the technical solution. They question everything, starting with their understanding of the problem at hand. What are they assuming, and can that be assumed? When not given a context for where the text field is, they may show a few clearly different ones to highlight their choices. Or if the information isn't given, they try to figure out ways of getting to it.

The small group I had together started just with brainstorming the answer. But this level wasn't where we left off.


After the listing of ideas (and assumptions, there were a lot of those), I opened a web page on my computer with a text field and an OK button, and had the group mob to explore, asking them to apply their ideas to this. Many of the things they had mentioned in the listing exercise just before immediately got dropped - the piece of software and the possibility to use it took people with it.

The three exercises

The first exercise was a trick exercise. I had just had them spend 10 minutes thinking about how they would test, and mostly they had not thought about the actual functionality associated with the text field. Facing one, they started entering values and looking at the output. Over time, they came up with theories but did not follow up on testing those, and got quite confused. The application's text field had no functionality; only the button had. After a while, they realized they could go into the dev tools and the code. And were still confused about what the application did. After a few rounds of three minutes each on the keyboard, I had us move on to the next example.

The second exercise was a text box in the context of a fairly simple editor application, but one where focusing on the text box alone without the functions immediately connected to it (the unit test perspective) would miss a lot of information. The group was strong on ideas, but weaker on execution. When giving a value, what a tester has to do is stop (very briefly) and look at what they learned. The learning wasn't articulated. They missed things that went wrong. Things where, to me as an experienced exploratory tester, the application is almost shouting how it is broken. But they also found things I did not remember, like the fact that copy-pasting did not work. With hints and guidance through questions, I got them to realize where the text box was connected (software tends to save stuff somewhere) and eventually we were able to understand what we could do with the application and what with the file it connects to. We generated ideas around automation, not through the GUI but through the file, and discussed what kinds of things that would enable us to test. When asked to draw a conceptual picture of the relevant pieces, they did well. There were more connections to be found, but that takes either a lot of practice on exploring or more time for learning the layers.
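As a minimal sketch of what that file-level automation idea could look like: assume a hypothetical editor that saves the text box contents to a plain-text file. Driving that layer instead of the GUI lets us push a lot of varied data through quickly. The save_via_editor helper below is a made-up stand-in for whatever mechanism the real editor offers:

    from pathlib import Path

    def save_via_editor(text: str, path: Path) -> None:
        # Stand-in for the real editor's save path (CLI, API, or file drop).
        path.write_text(text, encoding="utf-8")

    def test_round_trip_preserves_text(tmp_path):
        # tmp_path is pytest's built-in temporary directory fixture.
        target = tmp_path / "note.txt"
        original = "pitkä rivi ääkkösillä " * 200  # long, non-ASCII heavy input
        save_via_editor(original, target)
        assert target.read_text(encoding="utf-8") == original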

Again with the second exercise, I was left puzzled by what I observed. They had a lot of ideas as a group on what to try, but less discipline in trying those out and following up on what they had tried. While they could talk of equivalence partitioning or boundaries, their actual approaches to thinking about what values are equivalent, and to learning more as they used the application, left something to hope for. The sources of actual data were interesting to see: "I want a long text" ended up as something they could measure, but there was no immediate awareness of an online tool that would help with that. They knew some existed, but did not go get them. It could have been a priority call, but they also did not talk about making a priority call. When the application revealed new functionality, I was making a mental note of new features of the text box I should test. And when that functionality (ellipsis shortening) changed into another (scroll bars), I had a bug in mind. Either they paid no attention, or I pointed it out too soon. Observation and reflection on the results were not as strong as idea generation.

The third exercise was a text field in a learning API, and watching that testing unfold was one of the most interesting parts. One person in the group quickly created categories of three outputs that could be investigated separately. This one was on my list because works right / works wrong is multifaceted here, and depends on where the functionality would be used and how reliable it would need to be. Interestingly, in the short timeframe we stuck with data we could easily generate. This third one gave me a lot to think about, as I later had one of my team's developers test it; they went even more strongly into testing one output at a time, insisting on never testing an algorithm without knowing what it includes. I regularly test their algorithms, assessing whether the algorithm is good enough for its purpose of use, and found that the discussion was around "you shouldn't do that, that is not what testers are for".

The session gave me a lot of food for thought. Enough so that I will turn this into a shorter session teaching some of the techniques of how to actually be valuable. And since the conference tracks I have planned are already full, I might just take an extra room for myself to try this out as a fourth track.

Friday, September 15, 2017

Fixing Wishful Thinking

There's a major feature I've been testing for over six months now. Things like this are my favorite type of testing activity. We work together over a longer period of time, delivering more layers as time progresses. We learn and add insights, not only through external feedback but also because we have time to let our own research sink in. We can approach things with realism - the first versions will be iterated on, and whatever we are creating is indefinitely subject to change.

I remember when we started, and drew the first architectural diagrams on the wall. I remember when we discussed the feature in threat modeling and some things I felt I could not get through before got addressed with arguments around security. I remember how I collected dozens of claims from the threat modeling session as well as all discussions around the feature, and diligently used those to drive my learning while exploratory testing the product.

I'm pretty happy with the way that feature got tested. How every round of exploration stretched our assumptions a little more, giving people feedback that they could barely accept at that point.

Thinking of the pattern I've been through, I'm naming it Wishful Thinking. Time and time again with this feature and this developer, I've needed to address very specific types of bugs. Namely ones around design that isn't quite enough.

The most recent examples came from learning that certain standard identifiers have different formats. I suggested this insight should lead to us testing the different formats. I did not feel much support for it, but not active resistance either. I heard how the API we're connecting to *must* already deal with it - one of the claims I write down and test against. So I tested two routes, through a provided UI and through the API we connect with, only to have my hunch confirmed: the API was missing functionality that someone else's UI on top of it had added, and we, connecting directly with the API, missed it.

A day later, we no longer missed it. And this has been a pattern throughout the six months.

My lesson: as a tester, sometimes you work to fix Wishful Thinking. The services you are providing are a response to the developers you're helping out. They might not understand at all what they need, but they appreciate what they get - in small digestible chunks, timed so that the message turns into something actionable right now.


Learning through embarrassment

Through using one video from users to make a point, we created a habit of users sharing videos of problems. That is kind of awesome. But there's a certain amount of pain in watching people in pain, even if it wasn't always the software failing in software ways, but the software failing in people ways.

There was one video I got to watch that was particularly embarrassing. It was embarrassing because the problem shown in that video did not need a video. It was simply a basic thing breaking in an easy way. One where the fix is probably as fast as the video - with only a bit of exaggeration. But what made it embarrassing was that while I had tested that stuff earlier, I had not done so recently, and I felt personal ownership.

The soundtrack of that video was particularly painful. The discussion was around 'how do you test this stuff, you just can't test this stuff' - words I've heard so many times in my career. A lot of the time, problems attributed to lack of testing are known issues incorrectly prioritized for fixing. But this one was very clearly just that - lack of testing.

It again left me thinking about over-reliance on test automation (I've written about that before) and how, in my previous place of work, the lack of automation made us more careful.

Beating oneself up isn't useful, but there is one heuristic I can take from this learning: when all eyes are on you, the small and obvious becomes more relevant. The time dimension matters.

I know how I will test next week. And that will include pairing on exploratory testing. 

Saturday, September 9, 2017

Burden of Evidence


This week, I shared an epic win around an epic fail with the women in testing slack community, as we have a channel dedicated to brag and appreciate - sharing stuff in more words than on twitter, knowing your positive reinforcement of awesome things won't be held against you. My epic win was a case of being heard on something important, where not being heard and not fighting the case was the epic fail.

On top of thinking a lot about the ways I make my case to get feedback reacted on, like bringing in the real customer voice, the person I adore the most in the world of testing twitter tweeted on just this topic.
I got excited, thoughts rushing through my head on ways of presenting evidence, and of making both an objectively and an emotionally appealing case. And it is so true, it is a big part of what I do - of what we do in testing.

As great people come, another great one in the twitterverse tweeted in response:
This made me stop and think a little more. What if I did not have to fight, as a tester, to get my concerns addressed? What if the first assumption wasn't that the problem isn't relevant (it is clearly relevant to me, why would I talk about it otherwise?) and that the burden of evidence is on me? What if we listened, believed, trusted? What if we did not need to spend as much time on the evidence as testers - what if a hunch of evidence was enough, and we could collaborate on getting just enough to do the right things in response?

Wouldn't the world be more wonderful this way? And what is really stopping us from changing the assumed role of a tester from the one carrying the burden of evidence to someone helping identify what research we would need to conduct on the practical implications of quality?

Creating evidence takes time. We need some level of evidence in our projects to make the right decisions. But I, as a tester, often need more evidence than the project really needs, just to be heard and believed. And a big part of my core skillset is navigating the world in a way where, when there's a will, there's a way: I get heard. It just takes work. Lots of it.