
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Getting Your Apps Ready for Nexus 6 and Nexus 9

Android Developers Blog - Fri, 10/24/2014 - 00:53

By Katherine Kuan, Developer Advocate

Updated material design Tumblr app on Nexus 6.

Last week, we unveiled the Nexus 6 and Nexus 9, the newest additions to our Nexus family that will ship with Android 5.0 Lollipop. Together, they deliver a pure Google experience, showcasing fresh visual styles with material design, improved performance, and additional features.

Let’s make sure your apps and games are optimized to give your users the best mobile experience on these devices. We’ve outlined some best practices below.

Nexus 6 Screen

The Nexus 6 boasts an impressive 5.96” Quad HD display with a resolution of 2560 x 1440 pixels (493 ppi). This translates to roughly 730 x 410 dp (density-independent pixels).

Check your assets

It has a quantized density of 560 dpi, which falls between the xxhdpi and xxxhdpi primary density buckets. (One dp is one pixel at 160 dpi, so at 560 dpi each dp spans 560/160 = 3.5 physical pixels, giving 2560 / 3.5 ≈ 731 dp.) For the Nexus 6, the platform will scale down xxxhdpi assets; if those aren’t available, it will scale up xxhdpi assets.

Provide at least an xxxhdpi app icon because devices can display large app icons on the launcher. It’s best practice to place your app icons in mipmap- folders (not the drawable- folders) because they are used at resolutions different from the device’s current density. For example, an xxxhdpi app icon can be used on the launcher for an xxhdpi device.

res/
   mipmap-mdpi/
      ic_launcher.png
   mipmap-hdpi/
      ic_launcher.png
   mipmap-xhdpi/
      ic_launcher.png  
   mipmap-xxhdpi/
      ic_launcher.png
   mipmap-xxxhdpi/   
      ic_launcher.png  # App icon used on Nexus 6 device launcher

Adding xxxhdpi versions of the rest of your assets will provide a sharper visual experience on the Nexus 6, but it does increase APK size, so make the decision that is appropriate for your app.

res/
   drawable-mdpi/
      ic_sunny.png
   drawable-hdpi/
      ic_sunny.png
   drawable-xhdpi/   
      ic_sunny.png
   drawable-xxhdpi/  # Fall back to these if xxxhdpi versions aren’t available
      ic_sunny.png 
   drawable-xxxhdpi/ # Higher resolution assets for Nexus 6
      ic_sunny.png
Make sure you are not filtered on Google Play

If you are using the <compatible-screens> element in the AndroidManifest.xml file, you should stop using it, because re-compiling and publishing your app each time new devices come out does not scale. However, if you must use it, make sure to update the manifest to add the configuration for these devices (by screen size and density). Otherwise your app may be excluded from Google Play search results on these devices.
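For illustration, a hedged sketch of such a manifest entry (the element is standard, but the size/density combination below is a placeholder; you must list every combination your app actually supports, since unlisted ones are filtered):

<compatible-screens>
    <!-- ... your existing size/density combinations ... -->
    <!-- Hypothetical entry for 560 dpi devices such as the Nexus 6 -->
    <screen android:screenSize="normal" android:screenDensity="560" />
</compatible-screens>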

Nexus 9 Screen

The Nexus 9 is a premium 8.9” tablet with a screen resolution of 2048 x 1536 pixels (288 ppi), which translates to 1024 x 768 dp. This 4:3 aspect ratio is unique compared to earlier tablets. The Nexus 9 falls into the xhdpi density bucket, and you should already have assets in the drawable-xhdpi folder.

Updated Material Design Wall Street Journal app on Nexus 9.

Enable NDK apps for 64-bit

The Nexus 9 runs on a 64-bit dual-core processor, which makes it the first Android device to ship with a 64-bit ARM instruction set. Support for 64-bit processors was just added in Android 5.0, so if you have an NDK app, enable 64-bit support by updating the APP_ABI value in your Application.mk file:

APP_ABI := armeabi armeabi-v7a arm64-v8a x86 x86_64 mips mips64

More detailed instructions are provided on the developer site. You can test your 64-bit enabled app on a physical device with a 64-bit processor running Android 5.0, or take advantage of the recently announced 64-bit emulator in Android Studio.
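As a quick runtime sanity check, you can log which ABIs the device reports via Build.SUPPORTED_ABIS (added in API 21); a minimal sketch:

import android.os.Build;
import android.util.Log;

// Logs the device's supported ABIs in order of preference;
// on a Nexus 9 you would expect arm64-v8a to be listed first.
public static void logSupportedAbis() {
    for (String abi : Build.SUPPORTED_ABIS) {
        Log.d("AbiCheck", "Supported ABI: " + abi);
    }
}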

Update your hardware keyboard support

The Nexus 9 Keyboard Folio will be available as an accessory on Google Play. It’s very important that you don’t lock your app to a single orientation. The Nexus 9’s natural orientation is portrait mode, while it’s used in landscape mode with the keyboard. If you lock to the device’s natural orientation, the app may appear sideways on devices with keyboards.

Users should be able to navigate around the main content of the app with the keyboard, while relying on touch input or keyboard shortcuts for toolbar actions and button bars. Therefore, ensure that your app has proper keyboard navigation and shortcuts for primary actions. Keyboard shortcuts that are invoked with Ctrl + [shortcut] combo can be defined via menu items using:

<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item
        android:id="@+id/menu_create"
        android:title="@string/menu_create"
        android:alphabeticShortcut="c" />
</menu>

Alternatively, shortcuts can be defined using Activity#onKeyShortcut or View#onKeyShortcut. Learn more about keyboard actions here.

In MainActivity.java:

@Override
public boolean onKeyShortcut(int keyCode, KeyEvent event) {
    switch (keyCode) {
        case KeyEvent.KEYCODE_R:
            Toast.makeText(this, "Reply", Toast.LENGTH_SHORT).show();
            return true;
        default:
            return super.onKeyShortcut(keyCode, event);
    }
}
Responsive layouts with w- and sw- qualifiers

To take advantage of the screen real estate on the Nexus 6 and Nexus 9, we emphasize the importance of responsive design. In the past, if you assumed that landscape mode is significantly wider than portrait mode, you may have run into problems on a device like the Nexus 9, which has a 4:3 aspect ratio. Instead of declaring layouts using the layout-land or layout-port resource folder qualifiers, we strongly recommend switching to the w<N>dp width resource folder qualifier so that content is laid out based on available screen width.

Think about content first and foremost. Decide on the minimum and maximum screen real estate your content requires, and determine cutoff points at different screen widths where you can modify the layout composition of your app (number of grid columns, multi-pane layout, and so on).

For example, a single pane layout for your main activity on phones can be defined in:

res/layout/activity_main.xml

On larger screen devices, where the current orientation is at least 600dp in width, a new two-pane layout with a list alongside a detail pane could be declared in:

res/layout-w600dp/activity_main.xml
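As a purely hypothetical sketch, that two-pane version might place a list pane and a detail pane side by side (the ids and weights here are illustrative, not from the original post):

<!-- res/layout-w600dp/activity_main.xml -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal">

    <!-- List pane: one third of the available width -->
    <FrameLayout
        android:id="@+id/list_pane"
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="1" />

    <!-- Detail pane: two thirds of the available width -->
    <FrameLayout
        android:id="@+id/detail_pane"
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="2" />
</LinearLayout>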

On even larger screen devices, where the current orientation is at least 720dp in width, a new multi-pane layout where the detail pane requires even more horizontal space could be declared in:

res/layout-w720dp/activity_main.xml

As for attributes based on form factor, instead of declaring them in values-large or values-xlarge resource directories, use the sw<N>dp smallest width qualifier. For example, you could style your TextViews to have a medium font size on phones.

In res/values/styles.xml:

<style name="DescriptionTextStyle">
  <item name="android:textAppearance">?android:attr/textAppearanceMedium</item>
</style>

Meanwhile, TextViews could have a large font size when the smallest width of the device (taking the minimum of the landscape and portrait widths) is 600dp or wider. This ensures the font size of your app doesn’t change when you rotate this large screen device.

In res/values-sw600dp/styles.xml:

<style name="DescriptionTextStyle">
  <item name="android:textAppearance">?android:attr/textAppearanceLarge</item>
</style> 
Take advantage of 5.0 and Material

Set your android:targetSdkVersion to "21". Take note of the important behavior changes in Android 5.0 Lollipop including ART, the new Android runtime, to ensure that your app continues to run well. You can also leverage new platform APIs like richer notifications.
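In the manifest, that declaration looks like the following (the minSdkVersion shown is only a placeholder for whatever your app already declares):

<uses-sdk android:minSdkVersion="14" android:targetSdkVersion="21" />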

Nexus 6 and Nexus 9 users will be immersed in the new world of material design, and they’ll expect the same seamless transitions, bold colors, and delightful details from your app. As you invest time in bringing your app up to date with our latest design language, there’s a whole host of resources to help you make the leap, including important new updates to the support library, videos, and a getting started guide. Good luck and we can’t wait to see your apps!

Join the discussion on

+Android Developers
Categories: Programming

GPS on Android Wear Devices

Android Developers Blog - Fri, 10/24/2014 - 00:03

By Wayne Piekarski, Developer Advocate

With the latest release of Android Wear, wearables with built-in GPS like the Sony Smartwatch 3 can now give you a GPS location update directly from the wearable, without a paired phone nearby. You can now build an app like MyTracks that lets a user track their run even when they leave their phone at home. For wearable devices that do not have built-in GPS, a software solution has always existed in Google Play Services that automatically uses the GPS from your connected phone.

The Golfshot wearable app uses built-in GPS to calculate your distance to the next hole, even when you don’t have your phone with you.


Implementing GPS location updates

Implementing GPS location updates for Android Wear is simple. On the wearable, use the FusedLocationProviderApi from Google Play services to request location updates. This is the same API that has been available on mobile, so you can easily reuse your existing code and samples.

FusedLocationProviderApi automatically makes the most power-efficient decision about where to get location updates. If the phone is connected to the wearable, it uses the GPS on the phone and sends the updates to the wearable. If the phone is not connected to the wearable and the wearable has a built-in GPS, then it uses the wearable’s GPS.
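A minimal sketch of requesting updates with this API (it assumes a GoogleApiClient that was built with LocationServices.API and is already connected; the intervals are illustrative):

import com.google.android.gms.location.LocationListener;
import com.google.android.gms.location.LocationRequest;
import com.google.android.gms.location.LocationServices;

LocationRequest request = LocationRequest.create()
        .setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY)
        .setInterval(10000)        // update roughly every 10 seconds
        .setFastestInterval(5000); // but no faster than every 5 seconds

LocationServices.FusedLocationApi.requestLocationUpdates(
        googleApiClient, request, new LocationListener() {
            @Override
            public void onLocationChanged(android.location.Location location) {
                // Record or display the new position here.
            }
        });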

One case you’ll need to handle is if the phone is not connected to the wearable and the wearable does not have built-in GPS. You will need to detect this and provide a graceful recovery mechanism, such as a message telling the user to bring their phone with them. However, for the most part, deciding which GPS to use, and sending the position from the phone to the wearable, is handled automatically. You do not need to deal with the low-level implementation details yourself.

Data synchronization

When writing an app that runs on the wearable, you will eventually want to synchronize the data it collects with the paired phone. When the wearable is being taken out for a run, especially with the built-in GPS, there may not be a phone present. So you will want to store your location data using the Data Layer API, and when the phone reconnects with the wearable later, the data will be automatically synchronized.
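A hedged sketch of storing a track point through the Data Layer API (the path and keys are hypothetical; it assumes a connected GoogleApiClient built with Wearable.API):

import com.google.android.gms.wearable.PutDataMapRequest;
import com.google.android.gms.wearable.PutDataRequest;
import com.google.android.gms.wearable.Wearable;

// Build a data item keyed by a path; the values travel in a DataMap.
PutDataMapRequest dataMap = PutDataMapRequest.create("/track/point");
dataMap.getDataMap().putDouble("lat", location.getLatitude());
dataMap.getDataMap().putDouble("lng", location.getLongitude());
dataMap.getDataMap().putLong("time", location.getTime());
PutDataRequest request = dataMap.asPutDataRequest();

// The item is synchronized to the phone automatically when it reconnects.
Wearable.DataApi.putDataItem(googleApiClient, request);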

For more details about how to use the location API, check out the extensive documentation and sample here.

Android Wear apps on Google Play

Also, as a heads up, starting on November 3 with the public release of Android 5.0, you will be able to submit your apps for clearer designation as Android Wear apps on Google Play. If your apps follow the criteria in the Wear App Quality checklist and are accepted as Wear apps on Play, it will be easier for Android Wear users to discover your apps. Stay tuned for more information about how to submit your apps for Android Wear review through the Google Play Developer Console.

Join the discussion on

+Android Developers
Categories: Programming

What If We Could Weaponize Empathy?

Coding Horror - Jeff Atwood - Thu, 10/23/2014 - 19:43

One of my favorite insights on the subject of online community is from Tom Chick:

Here is something I've never articulated because I thought, perhaps naively, it was understood:

The priority for participating on this forum is not the quality of the content. I ultimately don't care how smart or funny or observant you are. Those are plusses, but they're never prerequisites. The priority is on how you treat each other. I expect spats, arguments, occasional insults, and even inevitable grudges. We've all done that. But in the end, I expect you to act like a group of friends who care about each other, no matter how dumb some of us might be, no matter what political opinions some of us hold, no matter what games some of us like or dislike. This community is small enough, intimate enough, that I feel it's a reasonable expectation.

Indeed, disagreement and arguments are inevitable and even healthy parts of any community. The difference between a sane community and a terrifying warzone is the degree to which disagreement is pursued in the community, gated by the level of respect community members have for each other.

In other words, if a fight is important to you, fight nasty. If that means lying, lie. If that means insults, insult. If that means silencing people, silence.

I may be a fan of the smackdown learning model and kayfabe, but I am definitely not a fan of fighting nasty.

I expect you to act like a group of friends who care about each other, no matter how dumb some of us might be, no matter what political opinions some of us hold, no matter what games some of us like or dislike.

There's a word for this: empathy.

One of the first things I learned when I began researching discussion platforms two years ago is the importance of empathy as the fundamental basis of all stable long term communities. The goal of discussion software shouldn't be to teach you how to click the reply button, and how to make bold text, but how to engage in civilized online discussion with other human beings without that discussion inevitably breaking down into the collective howling of wolves.

That's what the discussion software should be teaching you: Empathy.

You. Me. Us. We can all occasionally use a gentle reminder that there is a real human being on the other side of our screen, a person remarkably like us.

I've been immersed in the world of social discussion for two years now, and I keep going back to the well of empathy, time and time again. The first thing we did was start with a solid set of community guidelines on civilized discussion, and I'm proud to say that we ship and prominently feature those guidelines with every copy of Discourse. They are bedrock. But these guidelines only work to the extent that they are understood, and the community helps enforce them.

In Your Community Door, I described the danger of allowing cruel and hateful behavior in your community – behavior so obviously corrosive that it should never be tolerated in any quantity. If your community isn't capable of regularly exorcising the most toxic content, and the people responsible for that kind of content, it's in trouble. Those rare bad apples are group poison.

Hate is easy to recognize. Cruelty is easy to recognize. You do not tolerate these in your community, full stop.

But what about behavior that isn't so obviously corrosive? What about behavior patterns that seem sort of vaguely negative, but … nobody can show you exactly how this behavior is directly hurting anyone? What am I talking about? Take a look at the Flamewarriors Online Discussion Archetypes, a bunch of discussion behaviors that never quite run afoul of the rules, per se, but result in discussions that degenerate, go in circles, or make people not want to be around them.

What we're getting into is shades of grey, the really difficult part of community moderation. I've been working on Discourse long enough to identify some subtle dark patterns of community discussion that – while nowhere near as dangerous as hate and cruelty – are still harmful enough to the overall empathy level of a community that they should be actively recognized when they emerge, and interventions staged.

1. Endless Contrarianism

Disagreement is fine, even expected, provided people can disagree in an agreeable way. But when someone joins your community for the sole purpose of disagreeing, that's Endless Contrarianism.

Example: As an atheist, Edward shows up in a religion discussion area to educate everyone there about the futility of religion. Is that really the purpose of the community? Does anyone in the community expect to defend the very concept of religion while participating there?

If all a community member can seem to contribute is endlessly pointing out how wrong everyone else is, and how everything about this community is headed in the wrong direction – that's not building constructive discussion – or the community. Edward is just arguing for the sake of argument. Take it to debate school.

2. Axe-Grinding

Part of what makes discussion fun is that it's flexible; a variety of topics will be discussed, and those discussions may naturally meander a bit within the context defined by the site and whatever categories of discussion are allowed there. Axe-Grinding is when a user keeps constantly gravitating back to the same pet issue or theme for weeks or months on end.

Example: Sara finds any opportunity to trigger a GMO debate, no matter what the actual topic is. Viewing Sara's post history, GMO and Monsanto are constant, repeated themes in any context. Sara's negative review of a movie will mention eating GMO popcorn, because it's not really about the movie – it's always about her pet issue.

This kind of inflexible, overbearing single-issue focus tends to drag discussion into strange, unwanted directions, and rapidly becomes tiresome to other participants who have probably heard everything this person has to say on that topic multiple times already. Either Sara needs to let that topic go, or she needs to find a dedicated place (e.g. GMO discussion areas) where others want to discuss it as much as she does, and take it there.

3. Griefing

In discussion, griefing is when someone goes out of their way to bait a particular person for weeks or months on end. By that I mean they pointedly follow them around, choosing to engage on whatever topic that person appears in, and needle the other person in any way they can, but always strictly by the book and not in violation of any rules… technically.

Example: Whenever Joe sees George in a discussion topic, Joe now pops in to represent the opposing position, or point out flaws in George's reasoning. Joe also takes any opportunity to remind people of previous mistakes George made, or times when George was rude.

When the discussion becomes more about the person than the topic, you're in deep trouble. It's not supposed to be about the participants, but the topic at hand. When griefing occurs, the discussion becomes a stage for personal conflict rather than a way to honestly explore topics and have an entertaining discussion. Ideally the root of the conflict between Joe and George can be addressed and resolved, or Joe can be encouraged to move on and leave the conflict behind. Otherwise, one of these users needs to find another place to go.

4. Persistent Negativity

Nobody expects discussions to be all sweetness and light, but neverending vitriol and negativity are giant wet blankets. It's hard to enjoy anything when someone's constantly reminding you how terrible the world is. Persistent negativity is when someone's negative contributions to the discussion far outweigh their positive contributions.

Example: Even long after the game shipped, Fred mentions that the game took far too long to ship, and that it shipped with bugs. He paid a lot of money for this game, and feels he didn't get the enjoyment from the game that was promised for the price. He warns people away from buying expansions because this game has a bad track record and will probably fail. Nobody will be playing it online soon because of all the problems, so why bother even trying? Wherever topics happen to go, Fred is there to tell everyone this game is worse than they knew.

If Fred doesn't have anything positive to contribute, what exactly is the purpose of his participation in that community? What does he hope to achieve? Criticism is welcome, but that shouldn't be the sum total of everything Fred contributes, and he should be reasonably constructive in his criticism. People join communities to build things and celebrate the enjoyment of those things, not have other people dump all over it and constantly describe how much they suck and disappoint them. If there isn't any silver lining in Fred's cloud, and he can't be encouraged to find one, he should be asked to find other places to haunt.

5. Ranting

Discussions are social, and thus emotional. You should feel something. But prolonged, extreme appeal to emotion is fatiguing and incites arguments. Nobody wants to join a dry, technical session at the Harvard Debate Club, because that'd be boring, but there is a big difference between a persuasive post and a straight-up rant.

Example: Holly posts at the extremes – either something is the worst thing that ever happened, or the best thing that ever happened. She will post 6 to 10 times in a topic and state her position as forcefully as possible, for as long and as loud as it takes, to as many individual people in the discussion as it takes, to get her point across. The stronger the language in the post, the better she likes it.

If Holly can't make her point in a reasonable way in one post and a followup, perhaps she should rethink her approach. Yelling at people, turning the volume to 11, and describing the situation in the most emotional, extreme terms possible to elicit a response – unless this really is the worst or best thing to happen in years – is a bit like yelling fire in a crowded theater. It's irresponsible. Either tone it down, or take it somewhere that everyone talks that way.

6. Grudges

In any discussion, there is a general expectation that everyone there is participating in good faith – that they have an open mind, no particular agenda, and no bias against the participants or the topic. While short term disagreement is fine, it's important that the people in your community have the ability to reset and approach each new topic with a clean(ish) slate. When you don't do that, when people carry ill will from previous discussions toward the participants or topic into new discussions, that's a grudge.

Example: Tad strongly disagrees with a decision the community made about not creating a new category to house some discussion he finds problematic. So he now views the other leaders in the community, and the moderators, with great distrust. Tad feels like the community has turned on him, and so he has soured on the community. But he has too much invested here to leave, so Tad now likes to point out all the consequences of this "bad" decision often, and cite it as an example of how the community is going wrong. He also follows another moderator, Steve, around because he views him as the ringleader of the original decision, and continually writes long, critical replies to his posts.

Grudges can easily lead to every other dark community pattern on this list. I cannot emphasize enough how important it is to recognize grudges when they emerge so the community can intervene and point out what's happening, and all the negative consequences of a grudge. It's important in the broadest general life sense not to hold grudges; as the famous quote goes (as near as I can tell, attributed to Alcoholics Anonymous):

Holding a grudge is like drinking poison and expecting the other person to die.

So your community should be educating itself about the danger of grudges, the root of so many other community problems. But it is critically important that moderators never, and I mean never ever, hold grudges. That'd be disastrous.

What can you do?

I made a joke in the title of this post about weaponizing empathy. I'm not sure that's even possible. But you can start by having clear community guidelines, teaching your community to close the door on overt hate, and watching out for any overall empathy erosion caused by the six dark community behavior patterns I outlined above.

At the risk of sounding aspirational, here's one thing I know to be true, and I advise every community to take to heart: I expect you to act like a group of friends who care about each other, no matter how dumb some of us might be, no matter what political opinions some of us hold, no matter what things some of us like or dislike.

Categories: Programming

Basis of #NoEstimates are 27 Year Old Reports

Herding Cats - Glen Alleman - Thu, 10/23/2014 - 17:52

The #NoEstimates movement appears to be based on a 27-year-old report† that provides examples of FORTRAN and PASCAL programs as the basis on which estimating is done.


A lot has happened since 1987. For a short critique of the Software Crisis report - which is referenced in the #NoEstimates argument - see "There is No Software Engineering Crisis."

  • Parametric modeling tools - the structure of software projects is constrained by the external structures of their components. These can be parameterized for estimating purposes.
  • COCOMO - an example of a parametric estimating tool. There are several others.
  • Reference Class Forecasting models, developed as the result of overruns and disasters in many areas, including software development. We now know better than to succumb to all the biases discovered in the past.
  • Monte Carlo Simulation, using Reference Class Forecasting. There are simple and cheap tools, http://www.riskamp.com/ for example. For $125.00, all the handwaving around forecasting cost, schedule, and other project variables can be modeled with ease (see the sketch after this list).
  • Object Oriented Programming - old news, but no more debugging of FORTRAN "Unnamed COMMON" overwriting floating point numbers!
  • Component Based Software, where we can buy parts and assemble them.
  • SOA and CORBA (TIBCO), where ETL and the Enterprise Bus are part of the Enterprise Architecture. Stop writing application code and start writing scripts to integrate. BTW, the example of having developers write database apps for what is essentially a warehousing app has missed the COTS, component-based solutions bus.
  • FPA, while a bit long in the tooth, is still a valid idea.
  • Databases of Reference Classes
  • The Web and Web components, same as above.
  • CORBA, same as above.
  • All the web-based languages
  • All the runtime interpretive languages with built-in debuggers, rather than compiled code with stop-dead runtime debugging.
  • ERP and COTS products and components, with out-of-the-box functions that remove the need to write any code. Configure the system, sure. Write some scripting code (ABAP, e.g.), but no coding in the developer sense.
  • Software as a Service, where you can buy the solution. That was unheard of in 1986 and 1987.
  • DevOps, another idea unheard of back then.
  • Open Source and Reuse
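Since several of the items above lean on Monte Carlo simulation over reference classes, here is a toy sketch of the idea (not any particular commercial tool; the reference-class numbers are invented for illustration):

import java.util.Arrays;
import java.util.Random;

public class ReferenceClassMonteCarlo {
    public static void main(String[] args) {
        // Hypothetical reference class: durations (days) of similar past tasks.
        double[] referenceClass = {8, 10, 12, 15, 22};
        int trials = 10000, tasksInProject = 20;
        double[] totals = new double[trials];
        Random rng = new Random();

        for (int t = 0; t < trials; t++) {
            double total = 0;
            for (int i = 0; i < tasksInProject; i++) {
                // Resample a past observation for each task in the project.
                total += referenceClass[rng.nextInt(referenceClass.length)];
            }
            totals[t] = total;
        }

        Arrays.sort(totals);
        System.out.println("Median duration:  " + totals[trials / 2] + " days");
        System.out.println("75th percentile:  " + totals[(int) (trials * 0.75)] + " days");
    }
}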

Thousands of research and practicum books and papers on how to estimate software projects have been published. Maybe it's time to catch up with the 21st-century approach of estimating the time, cost, and capabilities needed to deliver value for those paying for our work. These approaches answer the mail in the 1987 report, along with the much-referenced NATO Software Crisis report published in 1986.

While estimates have always been needed to make decisions in the paradigm of the microeconomics of software development, the techniques, tools, and data have improved dramatically in the last 27 years. Let's acknowledge that and start taking advantage of those efforts to improve our lot in life as good stewards of other people's money. And when we hear that #NoEstimates can be used to forecast completion times and costs at the end, test that idea against the activities in the Baloney Claims checklist.

————————————

† #NoEstimates is an approach to software development that arose from the observation that large amounts of time have been spent over the years on estimating and improving those estimates, yet we see no value from that investment. Indeed, according to scholars Conte, Dunsmore and Shen [1], a good estimate is one that is within 25% of the actual cost, 75% of the time. (From http://www.mystes.fi/agile2014/)

As a small aside, that's not what the statement actually says in the context of statistical estimating. It says there is a 75% confidence that any overage will be within 25%, which needs to be covered with a 25% management reserve to protect the budget. Since all project work is probabilistic, uncertainty is both naturally occurring and event-based. Event-based uncertainty can be reduced by spending money; this is a core concept of Agile development: do small things to discover what will and won't work. Naturally occurring uncertainty can only be handled with margin. In this statement - remember, it's 27 years old - there is a likelihood that a 25% management reserve will be needed 25% of the time a project estimate is produced. If you know that ahead of time, it won't be a disappointment when it occurs 25% of the time.

This is standard best management practice in mature organizations. In some domains, it's mandatory to have Management Reserve built from Monte Carlo Simulations using Reference Classes of past performance.

Categories: Project Management

When to Release a Product

Software Requirements Blog - Seilevel.com - Thu, 10/23/2014 - 17:00
One of the projects I’ve been working on over the past year has been particularly challenging; it’s one of those “everything that can go wrong does go wrong” projects. This is a back-office product which automates a process updating data records. The updates are transactional. We were almost done and ready to release when […]
Categories: Requirements

Recap of the 2014 Google Cloud Platform Roadshows

Google Code Blog - Thu, 10/23/2014 - 17:00
We just wrapped up the Google Cloud Platform Roadshows, a series of developer events in 35 cities worldwide, where we reached nearly 4,500 developers spanning the globe, from Texas to Tel Aviv to Tokyo.

Now that the series is finished, we wanted to thank everyone for coming and share with you the slides and live recording of the talks from our New York City event.


The Roadshow team sends huge thanks to everyone who attended and looks forward to seeing you next year. We'd love to hear from you in the meantime.


Posted by Tom Van Waardhuizen, Program Manager
Categories: Programming

Software architecture sketching in Iceland

Coding the Architecture - Simon Brown - Thu, 10/23/2014 - 10:46

I'll be in Iceland next month for the Agile Iceland 2014 conference, which I'm really looking forward to as everybody tells me that Iceland is a fantastic country to visit. While in Iceland, I'll also be running my 1-day software architecture sketching workshop on the 6th of November. If you're interested in learning how to communicate the design of your software in a simple yet effective way without using lots of complex UML diagrams, please do join me. Everybody who attends will also get a copy of my Software Architecture for Developers ebook too. :-)

Categories: Architecture

Neo4j: Cypher – Avoiding the Eager

Mark Needham - Thu, 10/23/2014 - 06:56

Although I love how easy Cypher’s LOAD CSV command makes it to get data into Neo4j, it currently breaks the rule of least surprise by eagerly loading in all rows for some queries, even those using periodic commit.

Beware of the eager pipe

This is something that my colleague Michael noted in the second of his blog posts explaining how to use LOAD CSV successfully:

The biggest issue that people ran into, even when following the advice I gave earlier, was that for large imports of more than one million rows, Cypher ran into an out-of-memory situation.

That was not related to commit sizes, so it happened even with PERIODIC COMMIT of small batches.

I recently spent a few days importing data into Neo4j on a Windows machine with 4GB RAM so I was seeing this problem even earlier than Michael suggested.

Michael explains how to work out whether your query is suffering from unexpected eager evaluation:

If you profile that query you see that there is an “Eager” step in the query plan.

That is where the “pull in all data” happens.

You can profile a query by prefixing it with the word ‘PROFILE’. You’ll need to run the query in the /webadmin console in your web browser or with the Neo4j shell.

I did this for my queries and was able to identify query patterns which get evaluated eagerly; in some cases we can work around it.

We’ll use the Northwind data set to demonstrate how the Eager pipe can sneak into our queries but keep in mind that this data set is sufficiently small to not cause issues.

This is what a row in the file looks like:

$ head -n 2 data/customerDb.csv
OrderID,CustomerID,EmployeeID,OrderDate,RequiredDate,ShippedDate,ShipVia,Freight,ShipName,ShipAddress,ShipCity,ShipRegion,ShipPostalCode,ShipCountry,CustomerID,CustomerCompanyName,ContactName,ContactTitle,Address,City,Region,PostalCode,Country,Phone,Fax,EmployeeID,LastName,FirstName,Title,TitleOfCourtesy,BirthDate,HireDate,Address,City,Region,PostalCode,Country,HomePhone,Extension,Photo,Notes,ReportsTo,PhotoPath,OrderID,ProductID,UnitPrice,Quantity,Discount,ProductID,ProductName,SupplierID,CategoryID,QuantityPerUnit,UnitPrice,UnitsInStock,UnitsOnOrder,ReorderLevel,Discontinued,SupplierID,SupplierCompanyName,ContactName,ContactTitle,Address,City,Region,PostalCode,Country,Phone,Fax,HomePage,CategoryID,CategoryName,Description,Picture
10248,VINET,5,1996-07-04,1996-08-01,1996-07-16,3,32.38,Vins et alcools Chevalier,59 rue de l'Abbaye,Reims,,51100,France,VINET,Vins et alcools Chevalier,Paul Henriot,Accounting Manager,59 rue de l'Abbaye,Reims,,51100,France,26.47.15.10,26.47.15.11,5,Buchanan,Steven,Sales Manager,Mr.,1955-03-04,1993-10-17,14 Garrett Hill,London,,SW1 8JR,UK,(71) 555-4848,3453,\x,"Steven Buchanan graduated from St. Andrews University, Scotland, with a BSC degree in 1976.  Upon joining the company as a sales representative in 1992, he spent 6 months in an orientation program at the Seattle office and then returned to his permanent post in London.  He was promoted to sales manager in March 1993.  Mr. Buchanan has completed the courses ""Successful Telemarketing"" and ""International Sales Management.""  He is fluent in French.",2,http://accweb/emmployees/buchanan.bmp,10248,11,14,12,0,11,Queso Cabrales,5,4,1 kg pkg.,21,22,30,30,0,5,Cooperativa de Quesos 'Las Cabras',Antonio del Valle Saavedra,Export Administrator,Calle del Rosal 4,Oviedo,Asturias,33007,Spain,(98) 598 76 54,,,4,Dairy Products,Cheeses,\x
MERGE, MERGE, MERGE

The first thing we want to do is create a node for each employee and each order and then create a relationship between them.

We might start with the following query:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MERGE (employee:Employee {employeeId: row.EmployeeID})
MERGE (order:Order {orderId: row.OrderID})
MERGE (employee)-[:SOLD]->(order)

This does the job but if we profile the query like so…

PROFILE LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
WITH row LIMIT 0
MERGE (employee:Employee {employeeId: row.EmployeeID})
MERGE (order:Order {orderId: row.OrderID})
MERGE (employee)-[:SOLD]->(order)

…we’ll notice an ‘Eager’ lurking on the third line:

==> +----------------+------+--------+----------------------------------+-----------------------------------------+
==> |       Operator | Rows | DbHits |                      Identifiers |                                   Other |
==> +----------------+------+--------+----------------------------------+-----------------------------------------+
==> |    EmptyResult |    0 |      0 |                                  |                                         |
==> | UpdateGraph(0) |    0 |      0 |    employee, order,   UNNAMED216 |                            MergePattern |
==> |          Eager |    0 |      0 |                                  |                                         |
==> | UpdateGraph(1) |    0 |      0 | employee, employee, order, order | MergeNode; :Employee; MergeNode; :Order |
==> |          Slice |    0 |      0 |                                  |                            {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                              row |                                         |
==> +----------------+------+--------+----------------------------------+-----------------------------------------+

You’ll notice that when we profile each query we’re stripping off the periodic commit section and adding a ‘WITH row LIMIT 0’. This allows us to generate enough of the query plan to identify the ‘Eager’ operator without actually importing any data.

We want to split that query into two so it can be processed in a non-eager manner:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
WITH row LIMIT 0
MERGE (employee:Employee {employeeId: row.EmployeeID})
MERGE (order:Order {orderId: row.OrderID})
==> +-------------+------+--------+----------------------------------+-----------------------------------------+
==> |    Operator | Rows | DbHits |                      Identifiers |                                   Other |
==> +-------------+------+--------+----------------------------------+-----------------------------------------+
==> | EmptyResult |    0 |      0 |                                  |                                         |
==> | UpdateGraph |    0 |      0 | employee, employee, order, order | MergeNode; :Employee; MergeNode; :Order |
==> |       Slice |    0 |      0 |                                  |                            {  AUTOINT0} |
==> |     LoadCSV |    1 |      0 |                              row |                                         |
==> +-------------+------+--------+----------------------------------+-----------------------------------------+

Now that we’ve created the employees and orders we can join them together:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MATCH (employee:Employee {employeeId: row.EmployeeID})
MATCH (order:Order {orderId: row.OrderID})
MERGE (employee)-[:SOLD]->(order)
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |       Operator | Rows | DbHits |                   Identifiers |                                                     Other |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |    EmptyResult |    0 |      0 |                               |                                                           |
==> |    UpdateGraph |    0 |      0 | employee, order,   UNNAMED216 |                                              MergePattern |
==> |      Filter(0) |    0 |      0 |                               |          Property(order,orderId) == Property(row,OrderID) |
==> | NodeByLabel(0) |    0 |      0 |                  order, order |                                                    :Order |
==> |      Filter(1) |    0 |      0 |                               | Property(employee,employeeId) == Property(row,EmployeeID) |
==> | NodeByLabel(1) |    0 |      0 |            employee, employee |                                                 :Employee |
==> |          Slice |    0 |      0 |                               |                                              {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                           row |                                                           |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+

Not an Eager in sight!

MATCH, MATCH, MATCH, MERGE, MERGE

If we fast forward a few steps we may now have refactored our import script to the point where we create our nodes in one query and the relationships in another query.

Our create query works as expected:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MERGE (employee:Employee {employeeId: row.EmployeeID})
MERGE (order:Order {orderId: row.OrderID})
MERGE (product:Product {productId: row.ProductID})
==> +-------------+------+--------+----------------------------------------------------+--------------------------------------------------------------+
==> |    Operator | Rows | DbHits |                                        Identifiers |                                                        Other |
==> +-------------+------+--------+----------------------------------------------------+--------------------------------------------------------------+
==> | EmptyResult |    0 |      0 |                                                    |                                                              |
==> | UpdateGraph |    0 |      0 | employee, employee, order, order, product, product | MergeNode; :Employee; MergeNode; :Order; MergeNode; :Product |
==> |       Slice |    0 |      0 |                                                    |                                                 {  AUTOINT0} |
==> |     LoadCSV |    1 |      0 |                                                row |                                                              |
==> +-------------+------+--------+----------------------------------------------------+--------------------------------------------------------------+

We’ve now got employees, products and orders in the graph. Now let’s create relationships between the trio:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MATCH (employee:Employee {employeeId: row.EmployeeID})
MATCH (order:Order {orderId: row.OrderID})
MATCH (product:Product {productId: row.ProductID})
MERGE (employee)-[:SOLD]->(order)
MERGE (order)-[:PRODUCT]->(product)

If we profile that we’ll notice Eager has sneaked in again!

==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |       Operator | Rows | DbHits |                   Identifiers |                                                     Other |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |    EmptyResult |    0 |      0 |                               |                                                           |
==> | UpdateGraph(0) |    0 |      0 |  order, product,   UNNAMED318 |                                              MergePattern |
==> |          Eager |    0 |      0 |                               |                                                           |
==> | UpdateGraph(1) |    0 |      0 | employee, order,   UNNAMED287 |                                              MergePattern |
==> |      Filter(0) |    0 |      0 |                               |    Property(product,productId) == Property(row,ProductID) |
==> | NodeByLabel(0) |    0 |      0 |              product, product |                                                  :Product |
==> |      Filter(1) |    0 |      0 |                               |          Property(order,orderId) == Property(row,OrderID) |
==> | NodeByLabel(1) |    0 |      0 |                  order, order |                                                    :Order |
==> |      Filter(2) |    0 |      0 |                               | Property(employee,employeeId) == Property(row,EmployeeID) |
==> | NodeByLabel(2) |    0 |      0 |            employee, employee |                                                 :Employee |
==> |          Slice |    0 |      0 |                               |                                              {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                           row |                                                           |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+

In this case the Eager happens on our second call to MERGE as Michael identified in his post:

The issue is that within a single Cypher statement you have to isolate changes that affect matches further on, e.g. when you CREATE nodes with a label that are suddenly matched by a later MATCH or MERGE operation.

We can work around the problem in this case by having separate queries to create the relationships:

LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MATCH (employee:Employee {employeeId: row.EmployeeID})
MATCH (order:Order {orderId: row.OrderID})
MERGE (employee)-[:SOLD]->(order)
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |       Operator | Rows | DbHits |                   Identifiers |                                                     Other |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |    EmptyResult |    0 |      0 |                               |                                                           |
==> |    UpdateGraph |    0 |      0 | employee, order,   UNNAMED236 |                                              MergePattern |
==> |      Filter(0) |    0 |      0 |                               |          Property(order,orderId) == Property(row,OrderID) |
==> | NodeByLabel(0) |    0 |      0 |                  order, order |                                                    :Order |
==> |      Filter(1) |    0 |      0 |                               | Property(employee,employeeId) == Property(row,EmployeeID) |
==> | NodeByLabel(1) |    0 |      0 |            employee, employee |                                                 :Employee |
==> |          Slice |    0 |      0 |                               |                                              {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                           row |                                                           |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MATCH (order:Order {orderId: row.OrderID})
MATCH (product:Product {productId: row.ProductID})
MERGE (order)-[:PRODUCT]->(product)
==> +----------------+------+--------+------------------------------+--------------------------------------------------------+
==> |       Operator | Rows | DbHits |                  Identifiers |                                                  Other |
==> +----------------+------+--------+------------------------------+--------------------------------------------------------+
==> |    EmptyResult |    0 |      0 |                              |                                                        |
==> |    UpdateGraph |    0 |      0 | order, product,   UNNAMED229 |                                           MergePattern |
==> |      Filter(0) |    0 |      0 |                              | Property(product,productId) == Property(row,ProductID) |
==> | NodeByLabel(0) |    0 |      0 |             product, product |                                               :Product |
==> |      Filter(1) |    0 |      0 |                              |       Property(order,orderId) == Property(row,OrderID) |
==> | NodeByLabel(1) |    0 |      0 |                 order, order |                                                 :Order |
==> |          Slice |    0 |      0 |                              |                                           {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                          row |                                                        |
==> +----------------+------+--------+------------------------------+--------------------------------------------------------+
MERGE, SET

I try to make LOAD CSV scripts as idempotent as possible so that if we add more rows or columns of data to our CSV we can rerun the query without having to recreate everything.

This can lead you towards the following pattern where we’re creating suppliers:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MERGE (supplier:Supplier {supplierId: row.SupplierID})
SET supplier.companyName = row.SupplierCompanyName

We want to ensure that there’s only one Supplier with that SupplierID but we might be incrementally adding new properties and decide to just replace everything by using the ‘SET’ command. If we profile that query, the Eager lurks:

==> +----------------+------+--------+--------------------+----------------------+
==> |       Operator | Rows | DbHits |        Identifiers |                Other |
==> +----------------+------+--------+--------------------+----------------------+
==> |    EmptyResult |    0 |      0 |                    |                      |
==> | UpdateGraph(0) |    0 |      0 |                    |          PropertySet |
==> |          Eager |    0 |      0 |                    |                      |
==> | UpdateGraph(1) |    0 |      0 | supplier, supplier | MergeNode; :Supplier |
==> |          Slice |    0 |      0 |                    |         {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                row |                      |
==> +----------------+------+--------+--------------------+----------------------+

We can work around this, at the cost of a bit of duplication, by using ‘ON CREATE SET’ and ‘ON MATCH SET’:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MERGE (supplier:Supplier {supplierId: row.SupplierID})
ON CREATE SET supplier.companyName = row.SupplierCompanyName
ON MATCH SET supplier.companyName = row.SupplierCompanyName
==> +-------------+------+--------+--------------------+----------------------+
==> |    Operator | Rows | DbHits |        Identifiers |                Other |
==> +-------------+------+--------+--------------------+----------------------+
==> | EmptyResult |    0 |      0 |                    |                      |
==> | UpdateGraph |    0 |      0 | supplier, supplier | MergeNode; :Supplier |
==> |       Slice |    0 |      0 |                    |         {  AUTOINT0} |
==> |     LoadCSV |    1 |      0 |                row |                      |
==> +-------------+------+--------+--------------------+----------------------+

With the data set I’ve been working with I was able to avoid OutOfMemory exceptions in some cases and reduce the amount of time it took to run the query by a factor of 3 in others.

As time goes on I expect all of these scenarios will be addressed but as of Neo4j 2.1.5 these are the patterns that I’ve identified as being overly eager.

If you know of any others do let me know and I can add them to the post or write a second part.

Categories: Programming

The Metrics Minute: Velocity

Audio Version:  Software Process and Measurement Cast 119.

Definition:

The simple definition of velocity is the amount of work completed in a period of time (typically a sprint). The definition is related to productivity, which is the amount of effort required to complete a unit of work, and to delivery rate, the speed at which work is completed. The inclusion of a time box (the sprint) creates a fixed duration, which transforms velocity into more of a productivity metric than a speed metric (how much work can be done in a specific timescale by a specific team). Therefore, to truly measure velocity you need to estimate the units of work completed, have a definition of complete, and have a time box.

The definition of done in Agile is typically functional code; however, I think the definition can be stretched to reflect whatever terminal deliverable the sprint team has committed to create, based on the definition of done the team has established (for example, requirements for a sprint team working on requirements, or completed test cases in a test sprint).

Many Agile projects use the concept of story points as a metaphor for the size of functional code; note that other functional size measures can be used just as easily. Examples in this paper will use story points as the unit of measure. Effort and duration, however, are not size. Effort is an input that is consumed while transforming ideas into functional code; the amount of effort required for the transformation reflects size, complexity and other factors. Duration, like effort, is consumed by a sprint rather than created, and therefore does not measure what is delivered.

Formula

To calculate velocity, simply add up the size estimates of the features (user stories, requirements, backlog items, etc.) successfully delivered in an iteration.  The use of the size estimates allows the team to distinguish between items of differing levels of granularity.  Successfully delivered should equate to the definition of done.

Velocity = Story Points Completed Per Sprint

And:

Average velocity = Average Number of Story Points Per Sprint

The formula becomes more complex if staffing varies between sprints (and potentially less valuable as a predictive measure).  In order to account for variable staffing the velocity formula would have to be modified as follows:

velocity per person = sum across sprints of (story points completed in a sprint / number of people in that sprint) / number of sprints (observations)
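For example (a purely hypothetical team), suppose five people complete 30, 35 and 25 story points across three sprints. Then velocity per person = (30/5 + 35/5 + 25/5) / 3 = (6 + 7 + 5) / 3 = 6 story points per person per sprint.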

To be really precise (though not necessarily more accurate) we would also have to understand the variability of the data, since variability defines the level of confidence. Variability generated by differences in team member capabilities is one of the reasons that predictability is enhanced by team stability. As you can see, the more complex the environmental scenario becomes, the less simple the math must be to describe the scenario.

Uses:

Velocity is used as a tool in project planning and reporting. Velocity is used in planning to predict how much work will be completed in a sprint and in reporting to communicate what has been done.

When used for planning and estimation the team’s velocity is used along with a prioritized set of granular features (e.g., user stories, backlog items, requirements, etc.) that have been sized or estimated.  The team uses these factors to select what can be done in the upcoming sprint. When the sprint is complete the results are used to update velocity for the next sprint. This is a top down estimation process using historical data.

Over a number of sprints, velocity can be used both as a macro planning tool (when will the project be done?) and a reporting tool (we planned at this velocity and are delivering at this velocity).

Velocity can be used in all methodologies, and because it is team-specific, it is agnostic in terms of units of size.

Issues

As with all metrics, velocity has its share of issues.

The first is that an expectation of team stability is inherent in the metric. Velocity is affected by team size and composition, and without collecting additional attributes and correlating them to performance, change is not predictable (except by gut feel or Ouija board). Always keep notes on team size and capability so that you can understand your data over time.

Similarly, team dynamics change over time, sometimes radically, and radical changes in team dynamics will affect velocity. Shocks to any system of work are apt to create the same issue. Measurement personnel, Scrum Masters and team leaders need to be aware of people’s personalities and how they change over time.

The first-time application of velocity requires either historical data from other similar teams and projects or an estimate. In a perfect world a few sprints would be executed and data gathered before expectations are set; however, clients generally want an idea of whether a project will be completed, when it will be completed, and the functions that will be delivered along the way. Estimates of velocity based on the team’s knowledge of the past, or on other crowdsourcing techniques, are relatively safe starting points assuming continuous recalibration.

The final issue is the requirement for a good definition of done. Done is a concept that has been driven home in the agile community. To quote Mayank Gupta (http://www.scrumalliance.org/articles/106-definition-of-done-a-reference), “An explicit and concrete definition of done may seem small but it can be the most critical checkpoint of an agile project.”  A concrete definition of done provides the basis for estimating velocity by reducing the variability that comes from features in different states of completion.  Done also focuses the team by providing a goal to pursue. Make sure you have a crisp definition of done and recognize how that definition can change from sprint to sprint.

Related Metrics:

Productivity (size / effort)

Delivery Rate (duration / size)

Criticisms:

The first criticism of velocity is that the metric is not comparable between teams and, by inference, is not useful as a benchmark. Velocity was conceived as a tool for Scrum Masters and team leads to manage and plan individual sprints. There is no overarching set of rules to enforce standardization, so one team’s velocity is apt to reflect something different than the next’s. The criticism is correct but perhaps off the mark: as a team-level tool, velocity works because it is very easy to use and can be applied consistently; adding the complexity of standards and rules to make it an organizational metric would by definition reduce the simplicity, and therefore the usefulness, at the team level.

A second criticism is that estimates and budgets are typically set early in a project’s life, while team-level velocity may be unknown until later.  The dichotomy between estimating and planning (or budgeting and estimating, for that matter) is often overlooked.  Estimates developed early in a project, or in projects with multiple teams, require different techniques to generate. In large projects, applying team-level velocities requires techniques more akin to portfolio management, which add significant overhead. I would suggest that velocity is more valuable as a team planning tool than as a budgeting or estimation tool at the macro level.

A final criticism is that backlog items may not be defined at a consistent level of granularity, and therefore velocity may deliver inconsistent results. I tend to dismiss this criticism because it is true of any mechanism that relies on relative sizing. Team consistency will help reduce the variability in sizing; even so, all teams should strive to break backlog items into stories that are as atomic as possible.


Categories: Process Management

AppCompat v21 — Material Design for Pre-Lollipop Devices!

Android Developers Blog - Wed, 10/22/2014 - 22:58

By Chris Banes, Android Developer Relations

The Android 5.0 SDK was released last Friday, featuring new UI widgets and material design, our visual language focused on good design. To enable you to bring your latest designs to older Android platforms we have expanded our support libraries, including a major update to AppCompat, as well as new RecyclerView, CardView and Palette libraries.

In this post we'll take a look at what’s new in AppCompat and how you can use it to support material design in your apps.

What's new in AppCompat?

AppCompat (aka ActionBarCompat) started out as a backport of the Android 4.0 ActionBar API for devices running on Gingerbread, providing a common API layer on top of the backported implementation and the framework implementation. AppCompat v21 delivers an API and feature-set that is up-to-date with Android 5.0.

In this release, Android introduces a new Toolbar widget. This is a generalization of the Action Bar pattern that gives you much more control and flexibility. Toolbar is a view in your hierarchy just like any other, making it easier to interleave with the rest of your views, animate it, and react to scroll events. You can also set it as your Activity’s action bar, meaning that your standard options menu actions will be displayed within it.

You’ve likely already been using the latest update to AppCompat for a while; it has been included in various Google app updates over the past few weeks, including Play Store and Play Newsstand. It has also been integrated into the Google I/O Android app, which is open source.

Setup

If you’re using Gradle, add appcompat as a dependency in your build.gradle file:

dependencies {
    compile "com.android.support:appcompat-v7:21.0.+"
}
New integration

If you are not currently using AppCompat, or you are starting from scratch, here's how to set it up:

  • All of your Activities must extend from ActionBarActivity, which extends from FragmentActivity from the v4 support library, so you can continue to use fragments (see the sketch after this list).
  • All of your themes (that want an Action Bar/Toolbar) must inherit from Theme.AppCompat. There are variants available, including Light and NoActionBar.
  • When inflating anything to be displayed on the action bar (such as a SpinnerAdapter for list navigation in the toolbar), make sure you use the action bar’s themed context, retrieved via getSupportActionBar().getThemedContext().
  • You must use the static methods in MenuItemCompat for any action-related calls on a MenuItem.
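
Putting the first and last points together, here is a minimal sketch of an AppCompat-ready Activity (the layout, menu and item id names are illustrative assumptions, not part of the library):

import android.os.Bundle;
import android.support.v4.view.MenuItemCompat;
import android.support.v7.app.ActionBarActivity;
import android.view.Menu;
import android.view.MenuItem;

public class MainActivity extends ActionBarActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // This Activity's theme must inherit from Theme.AppCompat
        setContentView(R.layout.activity_main);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.main_menu, menu);
        // Action-related calls on a MenuItem go through MenuItemCompat
        MenuItem share = menu.findItem(R.id.action_share);
        MenuItemCompat.setShowAsAction(share, MenuItemCompat.SHOW_AS_ACTION_IF_ROOM);
        return true;
    }
}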

For more information, see the Action Bar API guide which is a comprehensive guide on AppCompat.

Migration from previous setup

For most apps, you now only need one theme declaration, in values/:

values/themes.xml:

<style name="Theme.MyTheme" parent="Theme.AppCompat.Light">
    <!-- Set AppCompat’s actionBarStyle -->
    <item name="actionBarStyle">@style/MyActionBarStyle</item>

    <!-- Set AppCompat’s color theming attrs -->
    <item name="colorPrimary">@color/my_awesome_red</item>
    <item name="colorPrimaryDark">@color/my_awesome_darker_red</item>
    
    <!-- The rest of your attributes -->
</style>

You can now remove all of your values-v14+ Action Bar styles.

Theming

AppCompat has support for the new color palette theme attributes which allow you to easily customize your theme to fit your brand with primary and accent colors. For example:

values/themes.xml:

<style name="Theme.MyTheme" parent="Theme.AppCompat.Light">
    <!-- colorPrimary is used for the default action bar background -->
    <item name="colorPrimary">@color/my_awesome_color</item>

    <!-- colorPrimaryDark is used for the status bar -->
    <item name="colorPrimaryDark">@color/my_awesome_darker_color</item>

    <!-- colorAccent is used as the default value for colorControlActivated,
         which is used to tint widgets -->
    <item name="colorAccent">@color/accent</item>

    <!-- You can also set colorControlNormal, colorControlActivated
         colorControlHighlight, and colorSwitchThumbNormal. -->
    
</style>

When you set these attributes, AppCompat automatically propagates their values to the framework attributes on API 21+. This automatically colors the status bar and Overview (Recents) task entry.

On older platforms, AppCompat emulates the color theming where possible. At the moment this is limited to coloring the action bar and some widgets.

Widget tinting

When running on devices with Android 5.0, all of the widgets are tinted using the color theme attributes we just talked about. There are two main features which allow this on Lollipop: drawable tinting, and referencing theme attributes (of the form ?attr/foo) in drawables.

AppCompat provides similar behaviour on earlier versions of Android for a subset of UI widgets, including EditText, Spinner, CheckBox, RadioButton, SwitchCompat and CheckedTextView.

You don’t need to do anything special to make these work; just use these controls in your layouts as usual and AppCompat will do the rest (with some caveats; see the FAQ below).

Toolbar Widget

Toolbar is fully supported in AppCompat and has feature and API parity with the framework widget. In AppCompat, Toolbar is implemented in the android.support.v7.widget.Toolbar class. There are two ways to use Toolbar:

  • Use a Toolbar as an Action Bar when you want to use the existing Action Bar facilities (such as menu inflation and selection, ActionBarDrawerToggle, and so on) but want to have more control over its appearance.
  • Use a standalone Toolbar when you want to use the pattern in your app for situations that an Action Bar would not support; for example, showing multiple toolbars on the screen, spanning only part of the width, and so on.
Action Bar

To use Toolbar as an Action Bar, first disable the decor-provided Action Bar. The easiest way is to have your theme extend from Theme.AppCompat.NoActionBar (or its light variant).

Second, create a Toolbar instance, usually via your layout XML:

<android.support.v7.widget.Toolbar
    android:id="@+id/my_awesome_toolbar"
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:minHeight="?attr/actionBarSize"
    android:background="?attr/colorPrimary" />

The height, width, background, and so on are totally up to you; these are just good examples. As Toolbar is just a ViewGroup, you can style and position it however you want.

Then in your Activity or Fragment, set the Toolbar to act as your Action Bar:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.blah);

    Toolbar toolbar = (Toolbar) findViewById(R.id.my_awesome_toolbar);
    setSupportActionBar(toolbar);
}

From this point on, all menu items are displayed in your Toolbar, populated via the standard options menu callbacks.
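
Handling a selected item is also unchanged; a tap on a Toolbar action still arrives through the standard callback (the item id below is hypothetical):

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    if (item.getItemId() == R.id.action_settings) {
        // React to the Toolbar action exactly as you would for an Action Bar
        return true;
    }
    return super.onOptionsItemSelected(item);
}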

Standalone

The difference in standalone mode is that you do not set the Toolbar to act as your action bar. For this reason, you can use any AppCompat theme and you do not need to disable the decor-provided Action Bar.

In standalone mode, you need to manually populate the Toolbar with content/actions. For instance, if you want it to display actions, you need to inflate a menu into it:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.blah);

    Toolbar toolbar = (Toolbar) findViewById(R.id.my_awesome_toolbar);

    // Set an OnMenuItemClickListener to handle menu item clicks
    toolbar.setOnMenuItemClickListener(new Toolbar.OnMenuItemClickListener() {
        @Override
        public boolean onMenuItemClick(MenuItem item) {
            // Handle the menu item
            return true;
        }
    });

    // Inflate a menu to be displayed in the toolbar
    toolbar.inflateMenu(R.menu.your_toolbar_menu);
}

There are many other things you can do with Toolbar. For more information, see the Toolbar API reference.

Styling

Styling of Toolbar is done differently to the standard action bar, and is set directly onto the view.

Here's a basic style you should be using when you're using a Toolbar as your action bar:

<android.support.v7.widget.Toolbar  
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:minHeight="?attr/actionBarSize"
    app:theme="@style/ThemeOverlay.AppCompat.ActionBar" />

The app:theme declaration will make sure that your text and items use solid colors (i.e., 100% opacity white).

DarkActionBar

You can style Toolbar instances directly using layout attributes. To achieve a Toolbar which looks like 'DarkActionBar' (dark content, light overflow menu), provide the theme and popupTheme attributes:

<android.support.v7.widget.Toolbar
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:minHeight="@dimen/triple_height_toolbar"
    app:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar"
    app:popupTheme="@style/ThemeOverlay.AppCompat.Light" />
SearchView Widget

AppCompat offers Lollipop’s updated SearchView API, which is far more customizable and styleable (cue the applause). We now use the Lollipop style structure instead of the old searchView* theme attributes.

Here’s how you style SearchView:

values/themes.xml:
<style name="Theme.MyTheme" parent="Theme.AppCompat">
    <item name="searchViewStyle">@style/MySearchViewStyle</item>
</style>
<style name="MySearchViewStyle" parent="Widget.AppCompat.SearchView">
    <!-- Background for the search query section (e.g. EditText) -->
    <item name="queryBackground">...</item>
    <!-- Background for the actions section (e.g. voice, submit) -->
    <item name="submitBackground">...</item>
    <!-- Close button icon -->
    <item name="closeIcon">...</item>
    <!-- Search button icon -->
    <item name="searchIcon">...</item>
    <!-- Go/commit button icon -->
    <item name="goIcon">...</item>
    <!-- Voice search button icon -->
    <item name="voiceIcon">...</item>
    <!-- Commit icon shown in the query suggestion row -->
    <item name="commitIcon">...</item>
    <!-- Layout for query suggestion rows -->
    <item name="suggestionRowLayout">...</item>
</style>

You do not need to set all (or any) of these; the defaults will work for the majority of apps.

Toolbar is coming...

Hopefully this post will help you get up and running with AppCompat and let you create some awesome material apps. Let us know in the comments/G+/Twitter if you have questions about AppCompat or any of the support libraries, or where we could provide more documentation.

FAQ
Why is my EditText (or other widget listed above) not being tinted correctly on my pre-Lollipop device?

The widget tinting in AppCompat works by intercepting any layout inflation and inserting a special tint-aware version of the widget in its place. For most people this will work fine, but I can think of a few scenarios where this won’t work, including:

  • You have your own custom version of the widget (i.e. you’ve extended EditText)
  • You are creating the EditText without a LayoutInflater (i.e., calling new EditText()). Both cases are shown in the sketch below.
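
A small sketch of the two cases (the layout name and parent view are hypothetical):

// Inside an ActionBarActivity, running on a pre-Lollipop device:

// Created directly with a constructor: AppCompat's layout inflation hook
// never sees this widget, so it will NOT be tint-aware.
EditText plain = new EditText(this);

// Inflated through the Activity's LayoutInflater: AppCompat intercepts the
// inflation and substitutes its tint-aware EditText in its place.
// (R.layout.tinted_edit_text is assumed to contain a single <EditText> root,
// and parentView is some ViewGroup already in your hierarchy.)
EditText tinted = (EditText) getLayoutInflater()
        .inflate(R.layout.tinted_edit_text, parentView, false);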

The special tint-aware widgets are currently hidden as they’re an unfinished implementation detail. This may change in the future.

Why has X widget not been material-styled when running on pre-Lollipop?
Only some of the most common widgets have been updated so far. There are more coming in future releases of AppCompat.
Why does my Action Bar have a shadow on Android Lollipop? I’ve set android:windowContentOverlay to null.
On Lollipop, the action bar shadow is provided using the new elevation API. To remove it, either call getSupportActionBar().setElevation(0), or set the elevation attribute in your Action Bar style.
Why are there no ripples on pre-Lollipop?
A lot of what allows RippleDrawable to run smoothly is Android 5.0’s new RenderThread. To optimize for performance on previous versions of Android, we've left RippleDrawable out for now.
How do I use AppCompat with Preferences?
You can continue to use PreferenceFragment in your ActionBarActivity when running on an API v11+ device. For devices before that, you will need to provide a normal PreferenceActivity which is not material-styled.
Categories: Programming

Episode 212: Randy Shoup on Company Culture

Tobias Kaatz talks to former Kixeye CTO Randy Shoup about company culture in the software industry in this sequel to the show on hiring in the software industry (Episode 208). Prior to Kixeye, Randy worked as director of engineering at Google for the Google App Engine and as chief engineer and distinguished architect at eBay. […]
Categories: Programming

Paper: Actor Model of Computation: Scalable Robust Information Systems

With Reactive Systems becoming the new old hotness, it helps to have a thorough grounding in the Actor Model, and here's a good start: Carl Hewitt, in Actor Model of Computation: Scalable Robust Information Systems, gives a very thorough and relatively concise explanation of the Actor model.

Here's the abstract.

The Actor model is a mathematical theory that treats "Actors" as the universal primitives of concurrent digital computation. The model has been used both as a framework for a theoretical understanding of concurrency, and as the theoretical basis for several practical implementations of concurrent systems. Unlike previous models of computation, the Actor model was inspired by physical laws. It was also influenced by the programming languages Lisp, Simula 67 and Smalltalk-72, as well as ideas for Petri Nets, capability-based systems and packet switching. The advent of massive concurrency through client-cloud computing and many-core computer architectures has galvanized interest in the Actor model.


Actor technology will see significant application for integrating all kinds of digital information for individuals, groups, and organizations so their information usefully links together. Information integration needs to make use of the following information system principles:
    * Persistence: Information is collected and indexed.
    * Concurrency: Work proceeds interactively and concurrently, overlapping in time.
    * Quasi-commutativity: Information can be used regardless of whether it initiates new work or becomes relevant to ongoing work.
    * Sponsorship: Sponsors provide resources for computation, i.e., processing, storage, and communications.
    * Pluralism: Information is heterogeneous, overlapping and often inconsistent.
    * Provenance: The provenance of information is carefully tracked and recorded.

The Actor Model is intended to provide a foundation for inconsistency robust information integration.
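
If you want a concrete feel for the model before reading the paper, here is a minimal sketch in Java (illustrative only, not from Hewitt's paper): an actor encapsulates private state and a mailbox, communicates only by messages, and processes one message at a time.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class CounterActor implements Runnable {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private int count = 0; // private state, never shared directly

    void send(String message) {
        mailbox.offer(message); // asynchronous, non-blocking message send
    }

    @Override
    public void run() {
        try {
            while (true) {
                String msg = mailbox.take(); // processes one message at a time
                if ("increment".equals(msg)) {
                    count++;
                } else if ("print".equals(msg)) {
                    System.out.println(count);
                } else if ("stop".equals(msg)) {
                    return;
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        CounterActor actor = new CounterActor();
        new Thread(actor).start();
        actor.send("increment");
        actor.send("increment");
        actor.send("print"); // prints 2
        actor.send("stop");
    }
}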

Categories: Architecture

Baloney Claims: Pseudo–science and the Art of Software Methods

Herding Cats - Glen Alleman - Wed, 10/22/2014 - 16:40

Ascertaining the success and applicability of any claims that fall outside the accepted practices of business, engineering, or governance processes requires careful testing of ideas through tangible evidence that they are actually going to do what it is conjectured they're supposed to do.

The structure of this checklist is taken directly from Scientific American's essay on scientific baloney, but it sure feels right for many of the outrageous claims found in today's software development community about approaches to estimating cost, schedule, and likely outcomes.

How reliable is the source of the claim?

Self-pronounced experts often appear credible at first glance, but when examined more closely, the facts and figures they cite are distorted, taken out of context, long out of date, mathematically wrong, missing critical domain and context basis, or occasionally even fabricated.

In many instances the data used to support the claims are weak or poorly formed, relying on surveys of friends or hearsay, small population samples, classroom experiments, or, worse, anecdotal evidence in which the expert extends personal experience to a larger population.

Does this source often make similar claims?

Self-pronounced experts have a habit of going well beyond the facts and generalizing their claims to a larger population of problems or domains. Many proponents of ideas make claims that cannot be substantiated within a testable framework. This is the nature of early development in the engineering world. Of course, some great thinkers do frequently go beyond the data in their creative speculations.

But when those creative thinkers are invoked to support the new claims, it is more likely that the hard work of testing the claim outside of personal experience has not been performed.

They said agile wouldn't work, so my conjecture is getting the same criticism and I'll be considered just like those guys when I'm proven right.

Have the claims been verified by another source?

Typically, self-pronounced experts make statements that are unverified, or verified only by a source within their own private circle, or whose conclusions are based primarily on anecdotal information.

We must ask, who is checking the claims, and even who is checking the checkers? Outside verification is crucial to good business decisions as it is crucial to good methodology development.

How does the claim fit with what we know about how the world works?

Any specific claim must be placed into a larger context to see how it fits. When people claim that a specific method, approach, or technique results in significant benefits or dramatic changes in an outcome, they usually do not present the specific context for the application of their idea.

Such a claim is typically not supported by quantitative statistics either. There may be qualitative data, but this is likely to be biased by the experimental method as well as by the underlying population of the sample statistics.

In most cases to date, the sample size is minuscule compared to that needed to draw correlations, let alone establish causation, for the conjectured outcomes.

Has anyone gone out of their way to disprove the claim, or has only supportive evidence been sought?

This is the confirmation bias: the tendency to seek confirmatory evidence and to reject or ignore disconfirmatory evidence. The confirmation bias is powerful, pervasive and almost impossible to avoid.

It is why the methods that emphasize checking and rechecking, verification and replication, and especially attempts to falsify a claim, are critical.

When self-selected communities see external criticism as harassment, or respond with "you're simply not getting it" or "those people are like talking to a box of rocks," the confirmation bias is in full force.

Does the preponderance of evidence point to the claimant's conclusion or to a different one?

Evidence is the basis of all confirmation processes. The problem is that evidence alone is necessary but not sufficient. The evidence must somehow be "predicted" by the process, fit the process model, or otherwise participate in the process in a supportive manner.

Is the claimant employing the accepted rules of reason and tools of research, or have these been abandoned in favor of others that lead to the desired conclusion?

Unique and innovative ways of conducting research, processing data, and "conjecturing" about the results are not statistically sound. In almost every discipline there are accepted mechanisms for conducting research. One of the first courses taken in graduate school is quantitative methods for experiments; this course sets the ground rules for conducting research in the field.

Is the claimant providing an explanation for the observed phenomena or merely denying the existing explanation? 

This is a classic debate strategy: criticize your opponent and never affirm what you believe, in order to avoid criticism.

"Show us your data" is the starting point for engaging in a conversation about a speculative idea.

If the claimant proffers a new explanation, does it account for as many phenomena as the old explanation did? 

This concept is usually lost on "innovative" claims. The need to explain previous results is mandatory; without this bridge to past results, a newly suggested approach has no foundation for acceptance.

Do the claimant's personal beliefs and biases drive the conclusions, or vice versa?

All claimants hold social, political and ideological beliefs that could potentially slant their interpretations of the data, but how do those biases and beliefs affect their research in practice?

Usually, during peer review, such biases and beliefs are rooted out, or the paper or book is rejected.

In the absence of peer review (self-publishing is popular these days) there is no external assessment of the ideas, and the author simply reinforces the confirmation bias.

So the next time you hear a suggestion that appears to violate a principle of business, economics, or even physics, think of these questions. Let's apply them to the #NoEstimates suggestion that we can make decisions in the absence of estimates; that is, that we can make decisions about a future outcome without estimating the cost to achieve that outcome and the impact of that outcome.

The core question is how this conjecture can be tested beyond the personal anecdotes of those proffering the notion that decisions can be made in the absence of estimates. Certainly those making the claim have no interest in performing that test. It is incumbent on those attempting to apply the notion to first test it for validity, applicability, and simple credibility.

A final recommendation is Ken Schwaber's talk and slides on evidence-based discussions around improving the business of software development, and the book he gave away at the end of the talk: Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management.

Related articles: Falsifiability and Junk Science
Categories: Project Management

Think a Series of Sprints, Not Marathons

When you drive business change and digital initiatives with Cloud, Mobile, Social, and Big Data (and Internet of Things), successful businesses think a series of sprints, not marathons.

Successful businesses go digital by transforming their customer experiences, their employee experiences, and their back-office experiences through rapid prototyping, building proofs-of-concept, testing pilots, and going to production.  It’s a fast cycle of prototype -> proof-of-concept -> pilot -> production.

These short cycles create rapid learning loops, build momentum, and help adapt for change.

In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned in driving digital initiatives and agile transformation.

The Digital World Moves Quickly

Avoid Big Up Front Design.  Whenever there is a big lag time between designing it, developing it, and using it, you’re introducing more risk.  You’re breaking feedback loops.  You’re falling into the pit of analysis paralysis.   Focus on “just enough design” so that you can test what works and what doesn’t, and respond accordingly.

Via Leading Digital:

“The digital world moves quickly.  The rapid pace of technology innovation today does not lend itself to multiyear planning and waterfall development methods common in the ERP era.  Markets change, new technologies become mainstream, and disruptive entrants begin courting your customers.  Your roadmap will need to be nimble enough to recognize these changes, adapt for them, and course-correct.”

Keep a Vision in Mind and Build on Success Along the Way

Hold on to the vision and use that to guide you as you test your ideas and implement them, without getting bogged down.

Via Leading Digital:

“To design an agile transformation, borrow an approach that has become common among today's leading software companies.  Keep people committed to the end goal, but pace your initiatives as short sprints of effort.  Create prototype solutions, and experiment with new technologies or approaches.  Evaluate the results, and incorporate the results into your evolving roadmap.  Adam Brotman, Starbucks CDO, explained the iterative process: 'We didn't have all the answers, but we started thinking about other things we could do ... I think it worked not to go too far, too fast, but to keep a vision in mind and keep building on success along the way.'”

Test Ideas, Save Time, Adapt to Changes

Short cycle times help you respond to market change and adapt as you learn what works and what doesn’t.

Via Leading Digital:

“The test-and-learn approach will require some new ways of working in its own right, but it enjoys some distinct advantages.  By marketing ideas quickly before they go to scale, this approach saves time and money.  Its short cycle times also make it more adaptive to external changes.  Finally, it enables your transformation to sustain momentum through small, incremental successes, rather than the big-bang approach of long-term programs.”

When it comes to your digital strategy and driving business transformation, drive your business change the agile way.

You Might Also Like

10 Ways to Make Agile Design More Effective

Building Better Business Cases for Digital Initiatives

Cloud Changes the Game from Deployment to Adoption

How Digital is Changing Physical Experiences

McKinsey on Unleashing the Value of Big Data Analytics

Categories: Architecture, Programming

Software Development Linkopedia October 2014

From the Editor of Methods & Tools - Wed, 10/22/2014 - 15:04
Here is our monthly selection of interesting knowledge material on programming, software testing and project management.  This month you will find some interesting information and opinions about managing code duplication, product backlog prioritization, engineering management, organizational culture, mobile testing, code reuse and big data. Blog: Practical Guide to Code Clones (Part 1) Blog: Practical Guide to Code Clones (Part 2) Blog: Selecting Backlog Items By Cost of Delay Blog: WIP and Priorities – how to get fast and focused! Blog: 44 engineering management lessons Article: Do’s and Don’ts of Agile Retrospectives Article: Organizational Culture Considerations with Agile Article: ...

Comparing solutions

Coding the Architecture - Simon Brown - Wed, 10/22/2014 - 14:38

Here's a little snippet that my class really picked up on yesterday. During the training course, we get people into groups and ask them to design a solution based upon some simple requirements. The deliverable is "one or more diagrams to describe your solution". Aside from answering a few questions about the business domain and the environment, that's pretty much all the guidance that groups get.

As you can probably imagine, the resulting diagrams are all very different. Some are very high-level, others very low-level. Some show static structure, others show runtime and behavioural views. Some show technology, others don't. Without a consistent approach, these differing diagrams make it hard for people to understand the solutions being presented to them. But furthermore, the differing diagrams make it really hard to compare solutions too.

As I've said before, I don't actually teach people to draw pictures. What I do instead is to teach people how to think about, and therefore describe, their software using a simple set of abstractions. This is my C4 model. With these abstractions in place, groups then redraw their diagrams. Despite the notations still differing between the groups, the solutions are much easier to understand. The solutions are much easier to compare too, because of the consistency in the way they are being described. A common set of abstractions is much more important than a common notation.

Categories: Architecture

Agile Concepts: The Rationale for Longer Cadence

 

Space between has to be big enough and no bigger.

Cadence represents a predictable rhythm. The predictability of cadence is crucial for Agile teams to build trust. Trust is built between the team and the organization, and amongst the team members themselves, by meeting commitments over and over and over. Because the power of cadence is important to the team’s well-being, the choice of cadence is often debated in new teams. Many young teams make the mistake of choosing a slower (longer) cadence. The most often cited reasons I have found are:

  1. The team has not adopted or adapted their development process to the concept of delivering working functionality in a single sprint. When a team leverages a waterfall or remnants of a waterfall method, work passes from phase to phase at sprint boundaries. For example, it passes from coding to testing. Longer time boxes feel appropriate to the team so they can get analysis, design, coding, testing and implementation done before the next sprint. The problem is that they are trying to “agilefy” a non-Agile process, which rarely works well.
  2. New Agile teams tend to lack confidence in their capabilities. The capabilities that teams need to sort out include both learning the techniques of Agile and understanding the abilities of the team members. Teams convince themselves that a longer sprint will provide a bit of padding to accommodate the learning process. The problem with adding padding is twofold: first, work tends to fill the available time (Parkinson’s Law); second, lengthening the sprint delays retrospectives. Retrospectives provide the platform a team needs to identify issues and make changes, which leads to improved capabilities.
  3. Large stories that can’t be completed in a single sprint are often cited as a reason to adopt longer-duration sprints and a slower cadence. This is generally a reflection of improper backlog grooming. More mature Agile teams typically adopt a rule of thumb to help guide the breakdown of stories. Examples include maximum size limits (e.g., 8 story points, 7 quick function points) or duration limits (a story must be able to be completed in 3 days).
  4. Planning sessions take too long, eating into the development time of the sprint. As with large stories, overly long planning sessions typically reflect either poor backlog grooming or trying to plan and commit to more than can be done within the sprint time box. Teams often lengthen the sprint rather than doing better grooming or taking less work. Even when the sprint duration is expanded, the problem of overly long planning sessions often returns as more stories are taken, and worsens as the team gets bored with planning.

Teams often think that they can solve process problems by lengthening the duration of their sprints, which slows their cadence. Typically a better solution is to make sure they are practicing Agile techniques rather than trying to “agilefy” a waterfall method, and to do a better job grooming stories. A faster cadence is generally better, if for no other reason than that the team will get to review their approach sooner through retrospectives!


Categories: Process Management

Welcome Firebase to the Google Cloud Platform Team

Google Code Blog - Tue, 10/21/2014 - 18:31
Today we extend a warm welcome to Firebase, who is joining the Google Cloud Platform team. Firebase makes it very easy for developers to build mobile and web apps that store and sync data in realtime.

Mobile is one of the fastest-growing categories of app development, but it’s also still too hard for most developers. With Firebase, developers are able to easily sync data across web and mobile apps without having to manage connections or write complex sync logic. Firebase makes it easy to build applications that work offline and has full-featured libraries for all major web and mobile platforms, including Android and iOS.

By combining Firebase with Google Cloud Platform, we’ll be able to build the best end-to-end platform for mobile application development. If you’re already a Firebase developer, you’ll start seeing improvements right away and if you’re a Google Cloud Platform customer, you’ll find it even easier to create great mobile and web apps. The entire Firebase team is joining Google and, under the leadership of Firebase co-founders James Tamplin and Andrew Lee, will be working hard to bring you great new features. Not only will the products you already love continue to get better, but you’ll also gain access to the full power of Google Cloud Platform.

At Google Cloud Platform Live on November 4, we’ll be demonstrating new Firebase features and integrations with Cloud Platform. You can join us there in person or you can register to stream online for free.

If you are looking for more info check out the Firebase blog. We can’t wait to see what applications you build!

Greg DeMichillie, Director of Product Management
Categories: Programming

2 minute models: A walk through the Business Objectives Model

Software Requirements Blog - Seilevel.com - Tue, 10/21/2014 - 17:00
Please join me for a quick walk through our Business Objectives Model. This video only scratches the surface of how valuable this model really is, and how it can be used for a variety of projects. Please feel free to make suggestions and ask questions in the comments section, and I will address them in […]
Categories: Requirements

Quote of the Day

Herding Cats - Glen Alleman - Tue, 10/21/2014 - 16:23

If a man will begin with certainties, he shall end in doubts, but if he will be content to begin with doubts, he shall end in certainties. - Francis Bacon

Everything in project work is uncertain. Estimating is required to discover the extent of the range of these uncertainties and the impacts of those uncertainties on cost, schedule, and the performance of the products or services.

To suggest that decisions can be made in the presence of these uncertainties without knowledge of the future outcomes and the cost of achieving those outcomes, or that we can decide between alternatives without estimating, is to ignore completely the notion of uncertainty and the microeconomics of decision making in its presence.

Categories: Project Management