Tuesday, January 23, 2018

Test Automation Smells so Obvious Anyone Could Notice

Like so many exploratory testing enthusiasts before me, I can easily find things to do with my time so that I don't have to go dig into the realms of test automation. But every now and then, unlike so many exploratory testing enthusiasts, I go and dig into automation anyway. It reads mostly like English, after all.



Here's my list of things to work on, inspired by looking at one set of tests today.

Tests that don't test, only tour

So you have some kind of structure for your tests. You have test suites / sets somewhere that bundle things together. You have some tests going into those suites / sets. And you have some libraries you can use. Great start. But take a look at the things in this structure that are considered tests. Can you find any that haven't got a single verification asserting that something must be true? When you do, these are tests that don't really test, they tour. They are a path to getting to a place where you actually want to do some testing. Don't muddle them together on the same level as things that actually check something.
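
To illustrate, here's a minimal pytest-style sketch; the app fixture and its methods are hypothetical stand-ins for whatever drives your product. The first function only tours, the second one actually checks something.

    # `app` is a hypothetical fixture driving the product under test.

    def test_tour_to_reports(app):
        # A tour: walks through the application but never asserts anything.
        # It can only fail by crashing along the way.
        app.login("user", "password")
        app.open_reports()

    def test_reports_list_current_month(app):
        # An actual test: ends in a verification.
        app.login("user", "password")
        reports = app.open_reports()
        assert "January 2018" in reports.titles()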

Tests inheriting tests over libraries

Inheritance can make things very muddled for your random stroller. You're trying to read the steps of what happens, and for the sake of similarity, someone came up with the great idea of inheriting and overriding to create the differences. The same things look like they could be done with libraries instead of test cases. Do you really have to conceptually mix up test cases with inheritance?
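
A sketch of the two shapes, with made-up names (BaseCheckoutTest and complete_checkout are hypothetical): in the inherited version the difference hides in an override somewhere up the class tree; in the library version it is visible right at the call site.

    # Inheritance: what checkout does depends on overrides up the class tree.
    class TestCreditCardCheckout(BaseCheckoutTest):
        def payment_method(self):
            return "credit_card"

    # Library: a shared helper takes the difference as plain data.
    def test_checkout_with_credit_card(app):
        complete_checkout(app, payment_method="credit_card")
        assert app.order_confirmed()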

Randoms inside a test

You see a test that says TestABC. Great, it looks like this is a test that tests ABC. You continue reading, and a call to random pops up in front of you. Whenever this test gets run, it always does ABC, but there are a few different ways to do ABC. Someone decided to leave which of the ways you get to fate, making sure that every time it fails, you need to go and check which of the options was broken. Isn't there enough lack of deterministic behavior in the tests without adding randoms in?
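
Here's a small runnable sketch of the smell and a deterministic alternative using pytest parametrization; do_abc is an invented stand-in for whatever the real test drives.

    import random
    import pytest

    def do_abc(variant):
        # Stand-in for the behavior under test.
        return f"did abc via {variant}"

    def test_abc_randomly():
        # Smell: which variant ran? A failure needs detective work.
        variant = random.choice(["fast", "slow", "legacy"])
        assert "abc" in do_abc(variant)

    # Alternative: every variant runs every time, and a failure names its variant.
    @pytest.mark.parametrize("variant", ["fast", "slow", "legacy"])
    def test_abc(variant):
        assert "abc" in do_abc(variant)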

Avoiding passing values

Reading the code, you start realizing that a whole lot of method calls look like they're the same, yet they are not - quite. There's some little detail in each that differs. Your method could take an argument, and you could do some magic with the argument. But for some reason it seemed a better idea to create a number of separate methods to call, each without arguments but including in its name what looks an awful lot like a thing that would have belonged in the place of an argument.
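
A toy illustration (the App class is a made-up stand-in): the varying detail is baked into each method name instead of being passed in.

    class App:
        """Made-up stand-in for an application driver."""
        def login(self, role):
            print(f"logged in as {role}")

    # Smell: one method per value, the would-be argument baked into the name.
    def login_as_admin(app): app.login("admin")
    def login_as_auditor(app): app.login("auditor")
    def login_as_viewer(app): app.login("viewer")

    # One method with an argument says the same thing once.
    def login_as(app, role):
        app.login(role)

    login_as(App(), "admin")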

Duplicating to my corner

There's a directory somewhere that uses a label you recognize as a concept meaning ownership, and it draws you in. And you find a nice cozy corner of clearly laid out tests. Reading through their names, you start to feel like you've seen some of them before. Searching confirms it - there's a well organized little corner where everything is concisely together. There is, however, a lot of duplicated code elsewhere. But that must be someone else's problem.


A Fool's Coverage

As an exploratory tester, you start realizing how much coverage there could be if the collaboration between what you learn and know would better turn into automation. You identify some of the English in the tests that makes sense in the world of using the application for exploring it, and realize how little of your ideas have ended up encoded into the tests. Whatever is in gets continuously monitored. Running the same thing a thousand times doesn't really add that much coverage - it's a version of a fool's coverage.



Here's a thing: anything I can name and recognize, I can fix. And while these feel obvious to me today, they clearly have not been equally obvious when they were introduced. What does your "let's fix these" list look like?


Monday, January 22, 2018

Learning Programming Test-Driven

I've been delivering a test automation course in a mob format. I get the whole group together to work on test automation activities. We first get an experience that we share, and then we talk about what we learned and need to know about automation.

We've typically spent a couple of sessions on Selenium WebDriver, first just creating a test that runs, but soon after refactoring the tests so that we'd have more of a page object pattern in use. We spend a session on ApprovalTests, just to figure out what could be tested if we approached testing by creating and comparing to a "golden master". We test an API in a "fill in the blanks" kind of style. And finally, we spend a session starting a program from blank, seeing where Test-Driven Development takes us.
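
For the Selenium sessions, the refactoring lands on something roughly like this minimal page object sketch in Python; the URL, locators and page name are illustrative, not from any real application.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        """Wraps the login page so tests read as intent, not as locators."""
        def __init__(self, driver):
            self.driver = driver

        def open(self):
            self.driver.get("http://localhost:8080/login")
            return self

        def log_in(self, username, password):
            self.driver.find_element(By.ID, "username").send_keys(username)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(By.ID, "login-button").click()

    def test_login_shows_welcome():
        driver = webdriver.Firefox()
        try:
            LoginPage(driver).open().log_in("demo", "secret")
            assert "Welcome" in driver.page_source
        finally:
            driver.quit()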

The Test-Driven activity has turned out to be my favorite. I love how the tests grow with the application. I love how small steps end up taking us to a coverage we wouldn't create later on if we tested after. But what I love the most is the way the group - usually full of people who don't identify as programmers, some people who have never written code in their life - reacts to it.

We usually do very simple problems like FizzBuzz. Getting the first test out of the group by asking for a single example is often the hardest part. Turning the example into code requires revealing some syntax, but none of the cruft around classes - just a single test. And when most of the programming in the IDE is learning to use the automatic generation of methods, people come out with a very different feel for programming than if they had to know how to write all the magic words around it.
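
In Python with pytest, that whole starting point can be as small as this sketch: one example turned into one test, and the simplest code that passes it.

    def fizzbuzz(number):
        # The simplest thing that passes the first example.
        return str(number)

    def test_one_returns_one():
        assert fizzbuzz(1) == "1"

The next examples, like 3 returning "Fizz" and 5 returning "Buzz", then grow the implementation one small step at a time.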

We've done this in Python. We've done it in C# while I still had Visual Studio. We most often do it in Java. And there is little difference between the languages when full editor support is available.

The reactions of the learners I've worked with make me want to experiment with this more. What if we could teach programming so that we did not go through the usual moves of naming concepts, but taught test first and experience first? Naming what you did can come later. Even much later, when you've seen several examples of how some things are done or used.


Thursday, January 18, 2018

I'm an awesome tester who also happens to be a woman

At a test automation conference, I started talking with another participant. We exchanged details of the craft through name dropping and appreciation of content. We talked about his upcoming talk, and I shared some ideas on the same topic. Excited about the discussion we were having, he decided to suggest I could do a lightning talk in the evening.

It was great that he suggested it - I was not aware there was a lightning talk session, let alone that I could still get listed and that the voting was going on throughout the day. I immediately made up my mind.

But he continued: "we need women speaking".

I said nothing, more like shrugged. But I decided to address it when speaking.

I introduced myself. I shared that I was there doing a lightning talk because another participant had encouraged me to, so that there would be women speaking. Needless to say, I was the only woman in the lightning talks round. I also mentioned that while it might feel my main reason to be there was my gender, I was there just as a person with credentials: that this was talk number 353 that I was delivering, and that Kent Beck had said the day before that he loved my book. I made a joke about it, because that was a better option for handling feeling uncomfortable than staying silent about it.

After the talks, I talked again with the other participant. He apologized and invited ideas on how he could have better expressed his excitement about what I had the potential to share. I suggested that reminding me of my gender (any woman of their gender) could be a pattern to avoid. They're already painfully aware.

The conference had no women on the organizing committee.
There was one woman speaker out of 9 talks.
There were LOTS of women in the audience.

There's no absolute "best" among speakers. Speakers tell stories based on the lives they've lived. We need diverse voices. What I said today to open my lightning talk was something that had just happened to me: all three details of my 'credentials'. I bet no one encouraged any of the men in a gender-based fashion.

I make lists of awesome testers. Notice the list never mentions "awesome women testers". Because the qualifier of gender isn't relevant. They are awesome, just as they are. Yet it looks like no one can share and promote that list without mentioning gender.



Sunday, January 14, 2018

Why positive discrimination is equality over time

I remember a spring day 20 years ago. I was a university student who had just taken a course on public presenting with a teacher who turned out to be the most transformative in my life. What I remember is not how her forcing me to watch myself speak on video made me realize my inner world was much messier than my presentation. I remember her for one of my first discussions on feminism.

I was a hopelessly shy student who believed she possessed few opinions. And even when I did have them, I was very uncomfortable sharing them. While on that course, I read the news every day to force myself to be even remotely able to take part in group discussions on day-to-day topics. That just wasn't me.

So when at the end of the course my teacher told me in private that she thought I was a feminist, I responded like so many women: I wasn't. I did not need to be. There was nothing wrong with equality. If anything, I had only ever been positively discriminated.

I did not think about that discussion for a very long time, but obviously the years since have changed my perspective and raised my awareness of the need for feminism. There are tons of wonderful writings on the problems and solutions, and my concern is still not that I get regularly mistreated, but that I've needed in many ways to be exceptional when normal should be enough.

A few days ago, I retweeted this:

Let's look at what it claims:

  • Women generally apply for jobs only when they meet all the requirements
  • Some women apply without meeting all the requirements, and that requires extra effort from them because it goes against what they'd naturally do.
Just sharing this tweet meant there was someone puzzled asking to be educated (extra work for women when all the resources are already available). To be honest, it did not sound like asking for education, much more like explaining to me why the tweet I shared was just wrong, using "one woman got selected with us even though she did not fill all the requirements" as evidence that this is not a general trend. Still, see point 2 above: she might have needed extra effort to apply. Regardless, one data point isn't enough.

In our discussion we soon got to a point I commonly see coming from women: there should not be positive discrimination - "I don't want to be selected for my gender".

The thing is, acts of discrimination are a long-term phenomenon, and we need to look at discrimination over the long term, not as an individual event happening at an individual job interview.
  • When I was 10, my family purchased our very first computer, and we had one or another ever since. They were always located in my (younger) brother's room, and I had to ask for permission to use them within the limits I was given. His access was less limited.
  • He started working on programming seriously at the age of 12 (I was 14). His friends were all into it. I coded games by typing them in from magazines already at 12, but I never had a single friend who'd do that with me.
  • By the time we both went to university to study computer science, he had 7 years of hobbyist programming because "computers were boys' toys". I had a budding interest, with time spent on BBSs and the rudimentary programming I had done teaching myself Turbo Pascal - schools gave you space to learn, they did not teach anything back then. Most girls were not quite so advanced.
  • Most of my fellow university students had backgrounds akin to my brother's. I was years behind. In addition, whenever I did group work, I got told I probably did not contribute anything - by other students but also by some teachers. I needed to continuously keep proof of my contributions, or work alone while others got to work in groups.
  • Any course with classroom exercises was my nightmare. There were 2% women, and many teachers believed both genders needed to speak every time. I learned to skip classes to suffer less. Again, more work just to survive.
  • If I was ready to ask for help, I had lots of classmates helping me. Usually at the price of them figuring out whether or not I was single.
Back then, this was how things were. I wasn't brave enough to call out any of this. I thought it was normal. That was the world I had always been in. I had no feminist friends to make me aware this was exceptional. I went through it all with plain stubbornness. 

I know I'm not alone with my experience.

So when I get invited to a conference past the call for proposals process, I recognize that as positive discrimination. Similarly, if two candidates in a job interview seem equal and the woman gets selected, that could also be positive discrimination. But we really don't hire just for the skills of today, but for the potential of tomorrow. So it is less straightforward.

With all the debt from the negative discrimination I've had to go through, I'm nowhere near equality yet.

So I believe in equity. We need to help those who need more help, more than those who started off in a more privileged position. Positive discrimination is equality over time - equity today.

My story is one of a privileged white woman. The stuff other underprivileged groups go through means we need to compensate for them much longer. 

PS: I spent 30 minutes writing this post and I've had thousands of discussions like this in my lifetime since I realized I'm a feminist. Imagine what those not needing to have these discussions get accomplished with that time. 



Friday, January 12, 2018

All I got for a week of programming was one lousy test script


From the title, you might think this post is about venting on how slow it is to learn automation. If that's what you are looking for, this is not that post. Instead, this is a post about insights into what happens while we program test automation.

There was a fairly simple end to end scenario that needed testing. The tool of choice was Python, and examples of doing something fairly similar were plentiful. 

To maintain focus, the scenario was first drafted just as code comments: the steps the script should go through, the verifications that needed to happen along the way, the way we would determine what to make note of while the test was running, and the things that would need to stop the test from proceeding because continuing would make no sense.
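
A sketch of what such a comment-first draft might look like in Python; the steps here are invented for illustration, the real scenario was the team's own.

    def test_end_to_end_scenario():
        # Step 1: log in as a regular user
        # Step 2: create an order with a single product
        # Verify: the order shows up in the order list      (check, must pass)
        # Note: record the processing time from the log     (observe, don't fail)
        # Verify: a confirmation email goes out             (check, must pass)
        # Abort: if login itself fails, nothing after it makes sense
        pass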

It could all be very simple, except it almost never is.

First of all, figuring out the scenario and some details of what to check requires the external imagination: the product we are testing. Seeing the details of what could be verified needs hands on the computer. We could call that automating, but what we actually do is mostly manual. Sometimes we can run the start of the script to get to the point of pondering. But the pondering is still a manual process. We look at what is available that we could programmatically access. We think about what is good enough to determine whether things work, and how the actual application would allow us to see things with code.

As we get into this manual process, we learn that while we wanted to do a thing, for some reason it does not work. We find bugs. Some of the bugs we notice when we just run through the scenario manually. Other bugs we notice because automation is picky. Where a person can just work around some deficiencies, automation may get us momentarily stuck. Something else needs changing before the script can proceed. And we end up with todo markings in our automation code, even fixing the problems in the application ourselves just to be able to make progress.

Towards the end of the week, multiple little learnings later and with blocking bugs fixed, we finally get the script to a point where it runs in its intended scope. Being then allowed to think outside this little agreed box that took the whole week, there's more. But also, just going through this one scenario is already making the work of adding another easier. There will again be bugs, but they will be different. And the scenario we already automated keeps getting run from its introduction on, alerting us to possible regressions.

I write this post because I read that "Testing as an exploratory, investigative activity, cannot be replaced by automated checks". It bothers me how often we testers say this. The automated checks are done by people too. The human part of a check precedes creating the automation that successfully executes things, and it grows as we add more checks. Many times when automating, we need to look in more detail.

The risk to good testing isn't in including automation in the way we work. It is in not looking wide, if automation gives you the sense of already covering whatever scenarios are relevant. The risk is the automators who say "this is fully tested" when there really is one happy day scenario with one set of very limited data and selections.

Automation has so much power as a way of executable documentation.

Thursday, January 11, 2018

It ain't bragging if it is true

I'm ready to blog about this:

Let me start off by quoting the colleague in question, with her permission:
I had a session with Maaret, where we went through things I do in my job as a software tester. It amazed me how difficult it was to brag about myself and even more, how difficult it was for me to see all the things I do.

We used a whiteboard and Maaret wrote down all the things she has seen me doing. I was speechless. I just nodded, yes yes, that's what I do - I just didn't understand it was worth mentioning. It was just "business as usual".

It is really hard to try to prove to your boss how useful you are, when all the things you do happen in the background without any "hard evidence".

I'm so grateful Maaret took time and went all this through with me, it was an eye-opening session for me, too, and now I have hard evidence to present to my boss :)
You've been here, right? Feeling that you do a good job, feeling that talking about what you consider good is bragging, and that bragging is just awful. It's so awful you can't even find the words to speak the truth to power when it matters to you personally the most: when someone is deciding on your future.

So how do you learn to brag about your contributions?

Ask someone else to brag for you

It is often easier to notice the good in others than in yourself. Listen when people say nice things about you, and instead of the "oh, that was nothing" that comes out all too often, replace it with "Thank you" and a deep mental note of what it was about. You belittle it. Don't. Small things are much bigger than you sometimes give them credit for.

You can also start specifically asking for feedback. And asking for help, like my colleague did, is also ok. Saying out loud the things you hide deep because you don't give them credit can be hard without giving the other person a chance to observe you. I worked in the same room for a month to be able to recognize some of the unique ways of working that fit her personality. We do the same work differently, even to the same results.

Practice

Start your bragging small, for a safe audience. If you start learning to brag about your work, effort and results to your boss, you could frame it as "I'm practicing making my contribution visible". Invite feedback. If your boss isn't the person you feel safe with, find someone who is.

Within the Women in Testing Slack community, we have (from Gitte Klitgaard's initiative, isn't she awesome!) a #BragAndAppreciate channel that gives excellent opportunities for trying out ways of saying something in a positive tone. Small and big brags are equally welcome.

I've had chances to assess my feelings about bragging with various coaches, guiding (forcing) me through bragging exercises. Realizing that almost everyone sucks at bragging and that we are culturally and structurally conditioned not to brag helps in giving yourself permission to try it out.

Start small, grow. Encourage others to share the positive. Share the positive about others actively, and look at which ones also apply to you.

Focus on the positive

Play to your strengths; everyone knows you have weaknesses anyway. It is not dishonest to just focus on the positive and build a case for why you are doing good work. If you're asked why you don't speak out in design meetings more, focus on what you do instead: listen fully without preparing to answer, digest, let things sink in. Focus on describing what you do with the information after it has sunk in. Make visible the 1:1 discussions that no one else pays attention to.

You will feel guilty about things, and you'll feel you want to say some of them out loud. Learn not to. Say them at a different time. Don't belittle yourself. Others in this world do too much of it already.

Tell stories

Really good bragging channels being proud and boastful when you talk about things you do and achieve (learning is an achievement, failing is an option). Include a story of what really happened; examples keep things real. For example, my colleague is a thorough, patient, detail-oriented tester. Instead of saying she finds a lot of bugs to discuss and then report, she can tell a story of a time she tested. Choose one that is exemplary or recent. You can't tell all the stories, so choose one that shows you in a good light.

Manage up

"It ain't bragging if it's true" is attributed to actor/humorist Will Rogers. You could say it's lying, not bragging if it isn't true. But the difference here is to look at yourself and your work in the best possible light. Shine the light on the good parts. We'll notice the others if they are relevant without you personally pointing them out as disclaimers every time you speak of yourself. We can talk of those at another time.

Appreciate what you do. All of what you do. You use a lot of time on doing it. It is more worth appreciating than you realize. You need to appreciate yourself so that your boss can learn to appreciate you more through your views. Your career is yours, and too important to be left to your manager. It's more of a collaboration; you drive your own future.

And to end the story my colleague started, one step further: she presented the list of what she does to her boss, only slightly apologizing that she needed to share all of this stuff. But the best part to me was what she said right after: "My throat is hurting, I was talking so much in this meeting". Mild bragging accomplished and adored.

Sunday, January 7, 2018

My #1 thing to Add With Testing

Over the years, I've had the pleasure of working with many kinds of developers. There have been those who struggle and barely get the code written, and testing for them is often somewhat painful: fixing makes things more broken, and everything I touch feels broken. The majority, however, succeeds fairly well both in creating something and changing it on feedback. And then there's the small lovely group of test-driven developers, who again are almost like a different species in the level of trust (or mechanisms of creating and maintaining trust) one can place on their changes.

There is, however, one type of testing I've been thinking about that tends to find relevant problems with all sorts of developers. And that is testing focused on the environment around the software we are creating.

I remember a big revelation years ago about what system testing can mean. I was testing security scanning software on a mobile platform, and the majority of what I needed to test was whether other applications, and the services other applications use, still worked with this software installed. It was by no means obvious. The system was much more than the mechanics of the software we created. It was everything our software touched. The software was special in comparison to many others, hooking deep into the operating system in ways that, with the possible combinations of differences in firmware, could result in interesting behaviors.

When I was testing ApprovalTests for the first time, the very first thing I went through was environment setup. I had my C# environment with two different test runners (there are more options, though), and I started setting up the thing I was about to test, failing miserably. I had just hit a bug that soon got fixed (and forgotten): the installation path through NuGet would fail in cases where more than one runner was installed. Again, the software failed for the environment it was put in.

Similar problems were there with the latest feature I was testing. It was fine "on my machine". But if "my machine" got more complicated, with competing ways of using the same services available, it would fail in interesting ways.

So, when testing, remember you're not testing just the software as the requirements seem to state. That software is supposed to live in an environment with other software. It has a lifecycle. It relies on shared services.

Sometimes, the environment with other software is not for your company to control. Who gets assigned blame for a problem of incompatibility? Usually the one who comes in last. You might at least want to think through what other software your software is supposed to live with, and test for those.