Situation: You need a simple PM app that integrates with Google Calendar
I thought this one was a little different. Most software is moving into the cloud, but at the outer edge of that software spectrum sit the free email accounts most of us have with Google.
Ganttic is a simple PM and resource management app that integrates with Google apps. It probably isn't appropriate for a large-scale technology project, but for the many smaller efforts that most of us deal with, it could be great. At ProjectManagement.com, we run Google Mail (the business version) for work. Then each person has a personal GMail account. It's great to integrate the two calendars to get a high-level view of what's going on in your life as a whole. Ganttic would bring small projects into that mix, giving you an even clearer view of the fact that you have way too much to do.
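That "high-level view" is really just a merge of several calendars into one timeline. As a minimal sketch - the event records and field names below are made up for illustration, not the Google Calendar API's actual format - the merge itself is straightforward:

```python
from datetime import datetime

# Hypothetical event records - in practice these would come from each
# calendar's API; the field names here are illustrative only.
work_events = [
    {"start": datetime(2012, 1, 9, 10), "title": "Sprint planning", "source": "work"},
    {"start": datetime(2012, 1, 9, 15), "title": "Vendor call", "source": "work"},
]
personal_events = [
    {"start": datetime(2012, 1, 9, 12), "title": "Dentist", "source": "personal"},
]

def merged_timeline(*calendars):
    """Flatten several calendars into one list ordered by start time."""
    events = [e for cal in calendars for e in cal]
    return sorted(events, key=lambda e: e["start"])

timeline = merged_timeline(work_events, personal_events)
for e in timeline:
    print(e["start"].strftime("%H:%M"), e["source"], "-", e["title"])
```

Tools like Ganttic do this for you, but the sketch shows why the combined view is useful: events from every source land on a single ordered timeline.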
In any case, for Google Power users, it's worth a look -
Situation: It's a New Year and you're looking to get things done.
A big selling point of simple, inexpensive SaaS tools is that they give you the functionality you need and nothing else. Vendors pair that with an interface that's easier to use, though much of that ease of use comes from the fact that there isn't much to do inside the app - there is no complexity to simplify.
In our actual work lives, there is plenty of complexity built right in - which is why we need tools in the first place. There is so much complexity that even with tools we have trouble focusing our efforts.
So the trend I've noticed recently is that tools are helping you focus on tasks or chunks of information one at a time. They effectively bring your attention to what matters now and obscure the rest. That last bit is where I see a difference in these new tools versus older ones - they completely clear away non-essential information. Yet that task or bit of information you are working on is kept in context in subtle ways - ensuring you still grasp the big picture.
For example, Workflowy is a neat little tool you could use to organize anything from a To-Do list to your life. There are lots of tools that help you create hierarchies, some of them very cool (I've always loved MindManager). The issue with these is that they do not force focus and clear away detail effectively enough. You can still get lost in a dense hierarchy of information. Workflowy limits what you see to one focus area. With Workflowy, higher levels of the structure are rolled up into breadcrumbs at the top, giving you context without detail. So you really only see what you should be working on now.
Prezi has been all the rage lately as a new, flexible way of presenting dense information. The idea is that people can absorb detail, but only in appropriate chunks. With Prezi, you essentially replace your PowerPoint deck with a huge virtual sheet of paper that puts everything into context like an infographic does. Then you zoom in on very specific parts of the presentation in a way that helps you tell a story. Again, dealing with complex information by breaking it into chunks - providing just enough context to have it all make sense.
What are you doing these days to help you focus? Are there any particular software tools that help? Please share -
Situation: You are a user of Daptiv's products and need a little help from time to time.
Q. Tell us a little bit about this new community you are starting. What are its goals, and what do you see as the immediate benefits to your user base? What will be the benefits for Power Users who are consistently active on the site?
The Daptiv community was conceived to foster product innovation and streamline knowledge sharing through mutual collaboration. This new platform gives all its users an opportunity to collaborate directly with Daptiv peers and employees, receive timely responses to product-related queries, and leverage the community’s knowledge of best practices.
A highly user-friendly and intuitive platform, it makes it easy for users to access training courses and refer to knowledge-base posts to deepen their Daptiv PPM expertise whenever needed. The 'Greenhouse' feature of the platform allows customers to share ideas and vote on new and innovative features for Daptiv’s product roadmap. It’s like an interactive knowledge house that is just a click away. Active users and contributors will stay abreast of the latest in technology, product and capabilities.
Q. How is this similar to or different from the MS Project 2010 Community or MPUG?
The Daptiv community includes a couple of unique capabilities. First, the platform is an evolved version of our Greenhouse community that was launched back in 2008 and enables customers to propose ideas and collaborate with our product team. Second, we include a library of Daptiv PPM applications and reports, which enable users to download best practice components and use them immediately in Daptiv PPM. This is in addition to our blogs, videos, forums and knowledge base.
Q. Will Daptiv employee participants be pre-selected, or can anyone at the company participate?
Just like a community, this new platform is open to all of Daptiv’s users and customers. It’s an open development platform and a community of contributors. The community not only encourages exchange of knowledge, but also allows Daptiv to stay in closer contact with clients and partners.
Q. Do you see Daptiv partners playing a role in this?
Absolutely! Daptiv partners are an integral part of this community. Our partners cater to an array of sectors, and we value the know-how they bring. This is a collaborative community where ideas, experiences and best practices are exchanged for better results. We are eager to listen in and drive Daptiv’s innovation process by creating a closer, more intimate dialogue with our customers.
Q. How do you see members of the community working together?
We have designed the Daptiv Community to become the ultimate go-to resource to help customers with their PPM questions and deliver value for their businesses. We see this platform as a breeding ground for new ideas, seamless engagement and reliability.
Q. In terms of long-term vision, what do you see this evolving into over the next few years? Do you see the scope of this extending beyond Daptiv-specific best practices to more general PPM advice?
The basic premise of this community is to evolve constantly by adapting to a changing environment. Conversations happen in a real-world setting through open dialogue in the discussion forums and a continually updated knowledge base.
Over the years, this community aims to serve as a one-stop entry point where both customers and employees pitch new ideas and initiate discussions with like-minded peers. Our vision is that this is the first place PPM practitioners from our user community come to ask questions, share best practices and connect with peers.
Situation: You think "the cloud" could be a solution for you, but you're still not sure...
Recently, we spoke with Scott Chapman, the President and CEO of Project Hosts, the leading provider of online Project Server, SharePoint and CRM solutions, with Microsoft competencies in Project and Portfolio Management and Hosting. Scott has been a market leader in cloud-based Microsoft applications since 1999 - so he really understands the ins and outs of "PM in the cloud". With the following interview questions, we wanted to give you a feel for what you should be considering when you are thinking about moving to the cloud. If you have more questions and happen to be in Phoenix in two weeks - Scott will also be speaking at the MS Project Conference 2012. His session is entitled, Going Online with PPM - What You Need to Know.
Q. When you talk about faster and less costly Project Server Deployments online, what's really driving the speed of implementation and cost savings? What are you giving up in return for those advantages when you compare it to a more traditional implementation?
A. Hosted deployments lead to faster implementations by avoiding three main roadblocks commonly encountered in onsite deployments: i) procurement delays in acquiring new hardware (or software), ii) delays to approve software for deployment (often requiring IT to have the training to support it), and iii) security delays in granting consultants the access they need to configure a solution. By avoiding these delays, online environments become operational more quickly, saving money through more effective and efficient project and portfolio management. Online environments also save money by allowing customers to only pay for what they need – starting small and scaling as needed.
The main things that customers give up in an online PPM environment are i) data being inside their corporate firewall, ii) integration to their onsite Exchange server, and iii) automatic corporate authentication. Some organizations will not consider online services in the first place, simply because their data would reside outside their firewall. But many organizations are OK with hosted project management information as long as the hosting provider has the correct security certifications (e.g. SAS70, ISO 27001, PCI). Although email alerts from Project Server and SharePoint work from online services, the Exchange integration that allows a user to update Project Server tasks from within Outlook does not work when connecting online PPM to onsite Exchange. Although it is possible to integrate online authentication with a customer’s corporate Active Directory, it does involve some customer IT work and is not automatic like in an onsite deployment.
Q. When do you have issues with security in the cloud? How do you ensure your PPM data in the cloud is secure?
A. Security is (and should be) the number one concern of most cloud customers. Many customers have their own checklists, assessments, or surveys that ask all the questions they need answered to ensure the safety of their data. Others use standard checklists like those for ISO 27001 or FISMA (Federal IT standards). It is important to make sure that a cloud service provider has had their security audited by a third party and to be able to see the results. Some customers will even perform their own audits, which may include penetration testing (ethical hacking) and other verification techniques.
Q. What do you give up in terms of customization with a cloud-based solution vs. an onsite one? Can you give some examples?
A. If the cloud-based solution is restricted to shared servers, then there are typically quite a few restrictions on customizations: one typically cannot add 3rd-party applications or custom webparts, and it may not even be possible to develop custom workflow or certain types of BI reports. If the cloud-based solution involves dedicated servers, then the above-mentioned customizations are possible, but there may still be limitations with integrations to onsite resources where the integration requires everything to be in the same domain (e.g. Exchange integration to Project Server).
Q. What are the licensing challenges that you need to be aware of as you integrate your PPM cloud implementation with all of the other MS applications that are present in most organizations?
A. If a customer provides their own licenses for a cloud solution (to reduce hosting fees), the customer will need to remember to account for those licenses along with their onsite licenses when they do periodic licensing “true-ups” with Microsoft.
Situation: Testing your large-scale applications would take more infrastructure than you can afford.
Some of the most impressive global applications are customer facing retail systems that are underutilized most of the time, then must perform well during extreme bursts of activity that are nearly impossible to replicate in any testing environment. Often the flaws in these applications only show up in production when the price of failure is high. Recently we spoke with Steve Dykstra, Product Management Director for Micro Focus and asked him a few questions about testing in the cloud. His answers offer a pretty good overview of the subject and why it might be useful to look into.
Q. Most large scale applications go through boom and bust periods of performance, yet testing peak performance would require huge investments in infrastructure. How do organizations avoid purchasing large volumes of hardware to support the scale of tests that they need?
A. Increasingly, companies are looking to the “Cloud” as a way to improve IT performance. Imagine being able to focus the power of the world’s largest data centers onto your computing needs – just when you need it most. That’s what the Cloud provides. It is a virtual set of computing resources that can be securely accessed via the Internet.
And now through Cloud-based computing you can apply the power of the Cloud to peak load testing. Cloud-based performance testing lets software quality teams rapidly launch any size peak-load performance test without the burden of managing complex infrastructures. Now, you can test and diagnose Internet-facing applications under immense global peak loads.
In principle, your Cloud-based peak load testing solution should allow users to define the number of virtual users required for a given test and schedule the computing resources necessary. They should be able to match the geography of these computing instances with their test plans. And they should be able to define the scripts that will be used to simulate user behavior.
These scripts are then accessed by Cloud-based test agents and run on a performance testing schedule. Managers would then be able to monitor tests and efficiently locate and diagnose where performance issues arise in their applications -- a critical step for any test manager.
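The workflow described above - scripted virtual users run concurrently against a target while latencies are monitored - can be sketched in plain Python. This is an illustrative toy, not any vendor's tool: the "endpoint" here is a local function standing in for a real HTTP request, and the names are made up.

```python
import random
import threading
import time

def fake_endpoint():
    """Stand-in for a real HTTP request; sleeps to simulate latency."""
    time.sleep(random.uniform(0.01, 0.05))

def virtual_user(requests_per_user, latencies, lock):
    """One scripted user: issue a fixed number of requests, record each latency."""
    for _ in range(requests_per_user):
        start = time.perf_counter()
        fake_endpoint()
        elapsed = time.perf_counter() - start
        with lock:
            latencies.append(elapsed)

def run_load_test(num_users=20, requests_per_user=5):
    """Launch num_users concurrent virtual users and report p95 latency."""
    latencies, lock = [], threading.Lock()
    threads = [
        threading.Thread(target=virtual_user, args=(requests_per_user, latencies, lock))
        for _ in range(num_users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    latencies.sort()
    # p95 latency - the kind of number a test manager would monitor.
    p95 = latencies[int(0.95 * len(latencies)) - 1]
    return len(latencies), p95

total, p95 = run_load_test()
print(f"{total} requests, p95 latency {p95 * 1000:.1f} ms")
```

A Cloud-based testing service does essentially this at vastly greater scale: the virtual-user script is yours, but the thousands of concurrent "threads" run on provisioned cloud instances rather than one machine.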
Q. We have seen that some industries are particularly susceptible to problems with peak performance. Retail is a good example. Which retail-specific events lead to performance related issues?
A. Thousands or even millions of customers, partners and employees engage with your company every day via the applications that you develop and test. So, these systems must perform as expected and be accessible when needed.
But your applications are under massive stresses. Growing volumes of customers access your systems at peak times of the day, season, or at discrete events like following a major marketing promotion.
And it’s not just the volume of users that leads to higher demand on your applications; there is also increased application complexity. Today’s Web 2.0 applications, which are designed to be more responsive to users, can be highly resource intensive. This compounds the effects of increased demand.
When your application is subject to sudden spikes or sustained usage volumes, it can behave in unexpected ways. The system may crash or become too slow to use. It could also become inaccessible as customers compete for access. Regardless, it leads to frustrated users and potentially lost business. Every moment that the application is inaccessible is potentially millions of dollars in lost revenue. Sudden spikes could be due to a holiday season or a sale. The business of retail is built on these kinds of spikes, so your applications should be too.
Q. Web 2.0 and social applications are often used to enhance shopping experiences. Will that affect how retail organizations should handle performance testing?
A. Today’s applications are often developed as dynamic and highly interactive Web 2.0 applications. These Rich Internet Applications require special testing in order to access functionality developed in AJAX, Silverlight, and Flex, among other technologies. Simple application testing is no longer sufficient because the background interactions and richness of the client side are so great. This is well suited to Cloud-based testing, as these more complex tests require more computing power. However, very few solutions are sufficiently mature to handle the dynamism of this style of application. When looking at a solution, make sure to investigate whether you need Web 2.0 support and what degree of coverage the solution provides.
While many applications today are entirely browser-based, that is not often the case for large enterprise applications. For instance, a bank’s core banking application will likely have a hybrid model. Part of the application is browser-based Internet banking and another part may be only accessible via bank tellers’ terminals.
This means that you need to test both routes to your system for completeness. An approach that relies solely on Cloud-based testing will be lacking since it needs to access the application via Internet protocols. As a result, it is important to determine upfront whether you need a mixed model that combines Internet protocols with support for .NET, Java, Oracle, SAP, Siebel, COM, and other enterprise application protocols.
Q. Most retail applications on the web are global. Do testing needs change with a distributed user base?
A. First off, not all Clouds are equal. Some can scale more readily than others. So you should consider the scale you need - whether you need computing power for tests with 50,000, 100,000, or 200,000 virtual users and beyond. Of course, not all Cloud-based load testing providers can scale to this degree, and not all testing solutions can effectively harness these kinds of resources, so care should be taken when selecting vendors.
Further, you should be able to simply schedule time for a test and have resources provisioned automatically. This avoids testing bottlenecks and prevents the long delays that come with acquiring and setting up internally managed hardware.
Also, the global nature of some Cloud-based solutions lets you “place” Virtual Users in a variety of locations to test international performance. No longer do you need to maintain hardware in a variety of countries in order to test. Not all Cloud-providers or test solutions can provide this capability, so it is key to evaluate if global-readiness is a requirement for you.
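One way to picture "placing" virtual users geographically is as a simple allocation plan: a total virtual-user budget split across regions by weight. The region names and weights below are invented for illustration, not any provider's actual options.

```python
# Hypothetical regional weighting for a global peak-load test.
REGION_WEIGHTS = {
    "us-east": 0.40,
    "eu-west": 0.35,
    "ap-southeast": 0.25,
}

def allocate_virtual_users(total_users, weights):
    """Split a virtual-user budget across regions, preserving the exact total."""
    allocation = {region: int(total_users * w) for region, w in weights.items()}
    # Give any rounding remainder to the heaviest region so the total is exact.
    remainder = total_users - sum(allocation.values())
    heaviest = max(weights, key=weights.get)
    allocation[heaviest] += remainder
    return allocation

plan = allocate_virtual_users(100_000, REGION_WEIGHTS)
print(plan)
```

A testing service with global load agents would then launch each region's share of virtual users from instances in that geography, so measured latencies reflect what real users in those locations would see.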
Discovering that your application failed under peak loads is clearly not enough. You want to discover why it failed and how to correct it. While this may seem obvious, diagnostic tools are often excluded from testing solutions. This is often the case when Cloud-based peak load tests are operating against an application you manage in-house.
Cloud-based tooling on its own cannot analyze the internal behavior of the application under test. This can mean that applications may be incompletely repaired following tests, increasing the risk of a real-world failure. It is more effective to combine the power of Cloud-based testing with “on the ground” diagnostics of your application performance.