Posted: June 26, 2013 Filed under: start up | Tags: A-B testing, innovation, Start-up, testing new idea, user segmentation
I won’t go into why testing concepts before investment or full-blown execution is important. You probably know this already, or else you would not have landed here.
What strikes me as odd, though, is that I have not come across much material about how to go about testing. So here is a short list that I believe summarizes the most common pitfalls:
1. Test the product’s core statement
Focus on the essence of the service. Strip it of all possible bells and whistles and try to confirm or reject the basic hypothesis behind the project.
Essentially try to answer two questions:
a. which problem is the product solving?
b. is this problem important enough for the target group? (to look for it, to return to it, to pay for it)
If the answer to either (a) or (b) is not a 100% YES, then you are sure to fail. No doubt about it.
2. Get more educated on the subject
Assuming that (1) is covered, you then need to accept that you don’t know as much as you should about the problem you are trying to solve.
For example, let’s say you want to develop a video aggregation service for sports lovers, as there is so much sport in the world. You go through step 1 and the need for a solution is indeed validated. So let’s say you develop a vision to create an elaborate algorithm that gathers data from various sources and also incorporates machine learning to personalize the service. Stop. First find out what matters most to your target group. What is it that they are lacking now? You should not answer this based on yourself, as that would be the equivalent of running a survey with a focus group of one person.
The biggest risk here is focusing on the wrong features: wasting time and energy developing, say, machine-learning technology to optimize the personalized ranking of videos, only to find out that users care about something so specific that it could easily be pinpointed without a self-improving mechanism. Once you research a bit, you will be in a position to design and build an MVP.
3. Build a prototype (MVP)
In the example above, just do the service manually. Build a tool that lets a person manually place videos in a playlist. If you don’t have the time to do it yourself, hire someone. It will cost you far less than building a machine to do it. Chances are the person will do the work better than the algorithm would anyway. Run a small ad campaign and direct traffic to a site showing the editor’s playlist of sports videos. Then study visitors’ behavior: average time on site, number of pageviews per visit, how many visitors actually return, etc. If the concept is interesting, it will show some positive signs.
In case you believe that personalization is key to the service and a one-editor-for-all approach does not do justice to the concept, then segment users really finely. In the sports video aggregation example, target fans of a particular team in a specific sport. The editor can then focus on that team, and you can safely assume that the playlist will be about as good as a personalized one for that target group.
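To make the "study visitors' behavior" step concrete, here is a minimal sketch of computing those three metrics from a visit log. The log format and numbers are made up for illustration; any analytics tool will give you the same figures.

```python
from datetime import date

# Hypothetical visit log for the MVP landing page:
# (visitor_id, visit_date, seconds_on_site, pageviews)
visits = [
    ("u1", date(2013, 6, 1), 240, 5),
    ("u2", date(2013, 6, 1), 30, 1),
    ("u1", date(2013, 6, 3), 180, 4),
    ("u3", date(2013, 6, 2), 90, 2),
]

total_visits = len(visits)
avg_time = sum(v[2] for v in visits) / total_visits
avg_pageviews = sum(v[3] for v in visits) / total_visits

# A visitor "returns" if they show up on more than one distinct day
days_seen = {}
for visitor, day, _, _ in visits:
    days_seen.setdefault(visitor, set()).add(day)
returning = sum(1 for days in days_seen.values() if len(days) > 1)
return_rate = returning / len(days_seen)

print(f"avg time on site: {avg_time:.0f}s")
print(f"avg pageviews per visit: {avg_pageviews:.1f}")
print(f"return rate: {return_rate:.0%}")
```

The point is not the arithmetic but deciding up front which of these numbers counts as a "positive sign" for your concept.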
4. Design a test for maximum knowledge
The test’s goal is to gather knowledge, so aim for that. Let’s assume you want to launch a new service to a userbase you have access to. Don’t send the invite to users randomly. Segment users based on every attribute you have, and do the segmentation separately: one split per attribute. For each attribute, separate the userbase into three groups: bottom 50%, middle 30% and top 20%. Whatever the results come out to be, you will be able to draw some conclusions about the userbase and the service’s appeal.
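A sketch of the 50/30/20 split for a single attribute, say visits last month (user IDs and values are invented). In practice you would repeat this for each attribute separately:

```python
# Hypothetical userbase: one score per user for one attribute
# (e.g. visits last month). Repeat the split per attribute.
users = {
    "u01": 2, "u02": 5, "u03": 1, "u04": 40, "u05": 8,
    "u06": 3, "u07": 22, "u08": 0, "u09": 15, "u10": 7,
}

ranked = sorted(users, key=users.get)  # ascending by attribute value
n = len(ranked)
low = ranked[: n * 50 // 100]                   # bottom 50%
medium = ranked[n * 50 // 100 : n * 80 // 100]  # middle 30%
top = ranked[n * 80 // 100 :]                   # top 20%

print("low:", low)
print("medium:", medium)
print("top:", top)
```

Invite each group and compare take-up rates; that comparison is where the knowledge about the userbase comes from.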
5. Change one item at a time
Otherwise you won’t know what to attribute the difference to. So, for example, if you are testing how different usergroups behave, you need to make sure you keep everything else the same: send the same email, on the same day and time, with the same landing page, etc. If you want to test two different email subject lines, the usergroups need to be identical, and so on.
6. Segment before time = 0
Segment users before the time of the test, not after. What not to do: run the test on a random sample and then see who participates and analyze that group. This can lead to a number of problems. For example, the test could alter the segmentation, e.g. trigger visits to the site, and hence shift users from one usergroup to another. Or heavy users will most probably end up under-represented if the sample is randomly selected, and according to the Pareto rule those are exactly the people one should focus on.
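The fix is simply to take a snapshot of the segmentation at t = 0 and analyze results against that frozen snapshot. A minimal sketch with invented attribute values and thresholds:

```python
# Attribute values measured BEFORE the test starts (hypothetical)
visits_before_test = {"u1": 50, "u2": 3, "u3": 12, "u4": 0, "u5": 30}

# Snapshot the segmentation at t = 0 (threshold of 20 is arbitrary)
segment_at_t0 = {
    u: ("heavy" if v >= 20 else "light")
    for u, v in visits_before_test.items()
}

# ... the test runs; the test itself may change behaviour,
# e.g. trigger extra visits, so do NOT re-segment afterwards ...
participated = {"u2", "u5"}

# Correct: attribute participation to the frozen, pre-test segments
by_segment = {"heavy": 0, "light": 0}
for u in participated:
    by_segment[segment_at_t0[u]] += 1

print(by_segment)
```

Had the segments been recomputed after the test, users whose behaviour the test itself changed would have drifted between groups and muddied the comparison.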
7. Define what you are looking at
Carefully define the metrics you will be looking at after the test is done. Make sure you will get the information you need. Write it down in a table, and confirm that every placeholder will be filled in with a number after the test runs.
8. Put down expected values beforehand
Take the time to predict what you expect to get. This is necessary to budget and time the test, but it will also help you evaluate the end result: it will show you where to focus your attention, namely where your assumptions proved to be far off. Do not put down too much data to look at, as you risk losing focus. Keep your eyes on what confirms or rejects the primary hypothesis at the heart of the product’s value.
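In code terms this is just a prediction sheet filled in twice. A sketch with made-up metrics and values, flagging where the assumption was far off (the 50% deviation threshold is arbitrary):

```python
# Hypothetical test sheet: expected values written down before the
# test, actuals filled in afterwards. All numbers are invented.
predicted = {"open rate": 0.20, "click rate": 0.05, "return rate": 0.10}
actual = {"open rate": 0.19, "click rate": 0.012, "return rate": 0.11}

# Flag metrics where the assumption proved far off (>50% deviation)
flagged = [
    m for m in predicted
    if abs(actual[m] - predicted[m]) / predicted[m] > 0.5
]

for metric in predicted:
    mark = "  <-- focus here" if metric in flagged else ""
    print(
        f"{metric}: predicted {predicted[metric]:.1%}, "
        f"actual {actual[metric]:.1%}{mark}"
    )
```

Here the click rate would jump out, while the open and return rates landed near the prediction and need no further attention.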
[Photo: Harold Camping, pictured in December 2002, who predicted doomsday would arrive the following day, May 21, 2011 (http://goo.gl/QVVqW)]
9. Be fast
The faster you design, execute and evaluate a test, the faster you can move on to the next one. The more tests you do, the more educated you become and the better the decisions you make. Working fast does not mean being sloppy; it means maximizing the knowledge you collect per unit of time. And it is all about knowledge at this point, while you are researching.
10. Look out for interferences
Watch out for outside factors affecting the test. For example, if the testing is done via email then make sure that on that day you don’t also send the weekly newsletter. This is something that could affect behavior in a random manner.
Please share any thoughts or personal experiences of typical pitfalls in testing new ideas in the comments section. I would be really interested to find out about them now, beforehand.
Posted: June 3, 2013 Filed under: Uncategorized | Tags: email marketing, mobile inbox, real-life case study
See below a screenshot from my mobile inbox. Stats say that the mobile inbox is already the no. 1 medium for reviewing emails, and more emails get deleted on mobile devices than anywhere else. Hence it makes sense to take a few minutes and review how your emails appear on mobile devices. For the sake of the exercise I am using the most popular single handset: the iPhone. The email senders are chosen at random and include:
- Saks – the world famous retailer
- Netrobe – a fashion iOS app that I am currently getting involved with as an advisor
- Kotsovolos – Greek subsidiary of Dixons
- MelinaMay – leading online stock outlet for the Greek market.
Saks: apparel retailer
(-) There are actually two emails from Saks in my inbox. That is too much, given that I have never purchased anything from Saks. Saks should have segmented me as an inactive user, tested multiple messages on that group and then sent the best-performing one to all inactive users, in the hope that as many inactive users as possible would convert to paying customers. Two emails in one day is a sure sign that Saks has not thoroughly tested those messages; also, as a recipient, I realize that these messages are nothing special deserving my attention any more than the previous messages I have been ignoring.
(+) Sender: a varied sender name helps, as it differentiates the two messages and makes recipients think the messages were actually sent from two different sources, which reduces the feeling of being spammed.
(+) Positive marks also on correct targeting: serving me messages for men’s products, and I am indeed male.
(+) The top message has a fairly effective title, kicking off with a most impressive 70% discount. It also makes clear that it refers to men’s products. Not so keen on the Must-Haves descriptor, which is too vague and hence conveys no real information.
(-) Negative marks for the descriptor below the title, especially in the top email, where it repeats the sender and then reads like random text without any coherence. The bottom email reads a bit more logically, but it still feels like an AdWords campaign, unsuitable for a personalized message to an existing subscriber.
NETROBE: a cool iOS app to help fashionistas organize their wardrobe
(-) Not cool, though, that it repeats its brand name in both the sender and the title: a waste of character real estate really
(-) Also not cool that the title conveys no information: the real information, i.e. the topic of the email, is replaced by “…” as it lies beyond the iPhone’s character limit
(+) The descriptor is ok as it reads as a logical text and provides brief product descriptors and brands that could engage recipients.
Kotsovolos Dixons: electronics retailer
(+) Good use of sender and title, effectively communicating who sends the email and what it refers to
(-) Negative marks for the topic of the email being only a CSR message (= look how great we are), offering no real incentive to read it. The correct implementation would include an invitation/CTA on how the recipient can help, or maybe what Kotsovolos Dixons customers have already done to contribute to this good deed.
(-) Big-time negative marks for leaving a generic text as the descriptor: Kotsovolos Dixons must never have paid any attention to the mobile inbox.
MelinaMay Fashion: apparel discount retailer
(+) Positive marks for using favicons in the title, and especially at the start of the title. A sure way to boost engagement. It also matches well with the title’s copy, which is always a plus.
(+) Good use of character real estate, informing recipients that new items have arrived. An important incentive for the bargain hunter to read the email and find out what type of products came in.
(-) Missed opportunity to use the descriptor below the title to outline some of the product types, brands etc. that just came in.
If you have any other comments on the emails shown in the photo, or disagree with any of the points, feel free to say so in the comments section.