I advocate for metrics tied to actual business value, like KPIs, but I sometimes see these miss the mark.
For example, I worked with an eCommerce software company and was tasked with meeting an SEO target: increasing the site's search ranking in order to increase revenue. This was in early 2016, when Google and other search engines weren't crawling single-page applications properly, so the site was not appearing high up, or at all, in searches for our product catalogue.
On the surface, this is a very reasonable target. We spent several weeks investigating rewriting parts of the single-page application to pre-render server-side to allow the search engines to crawl the data. This was going to be a radical rewrite which would take many months involving most of the engineering teams.
However, during this process, we started to look at the data we had around our user profiles and the usage patterns on the site. The site sold high-end luxury fashion clothes and jewellery. The conversion rate was quite low but the average basket value was very high (> £30k).
By examining this data, and publishing a survey attached to the registration and checkout processes, we determined that almost none of these high-value clients were even finding our site through search engines. The traffic to the site was driven, in roughly equal parts, by fashion magazine advertising and social media posts.
Due to this revelation, we switched our efforts from SEO to streamlining our registration and checkout processes, and started to sponsor social media influencers. This increased the conversion rate (albeit lowering the average basket value) and more efficiently drove high-value clients to the site.
As a happy side effect, less than four months after we abandoned the attempts at increasing site ranking, Google's crawler was updated to handle single-page applications and our page ranking increased without us having to expend any more time on it.
Sometimes, it’s worth challenging the hypothesis behind a target as not all KPIs are created equal.
If you want to discuss how to use hypothesis-driven development please contact me through the comments, email or LinkedIn.
Photo by Lukas
I have had several clients who have asked me to carry out software architecture reviews to determine how to replace bespoke software developed in-house with a commercial off-the-shelf (COTS) solution.
What I've found with most of these clients is that the COTS solution (even if customisable) only covers about 70-80% of the functionality of the bespoke solution they wish to replace, while also covering a load of other areas and features the client has no use for. In addition, clients are often not aware of the operational expenditure the COTS solution will incur, and haven't factored in the cost of customisations and maintenance, so they're not comparing like with like. They also tend to underestimate the cost and timescale of the data migration work required to move to the replacement system (usually by a factor of 2.5).
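To illustrate comparing like with like, here's a rough back-of-the-envelope sketch (in Python, with entirely made-up figures) of a five-year cost comparison that includes the licence opex, customisation, maintenance and a migration estimate inflated by that 2.5 factor:

```python
# Hypothetical total-cost-of-ownership comparison over 5 years.
# All figures are illustrative placeholders, not real client numbers.

def five_year_tco_cots(licence_per_year, customisation, maintenance_per_year,
                       migration_estimate, migration_overrun_factor=2.5):
    """5-year cost of a COTS solution, inflating the naive migration
    estimate by the overrun factor typically observed (~2.5x)."""
    return (licence_per_year * 5
            + customisation
            + maintenance_per_year * 5
            + migration_estimate * migration_overrun_factor)

def five_year_tco_bespoke(dev_per_year, ops_per_year):
    """5-year cost of keeping and maintaining the bespoke solution."""
    return (dev_per_year + ops_per_year) * 5

cots = five_year_tco_cots(licence_per_year=120_000, customisation=200_000,
                          maintenance_per_year=50_000, migration_estimate=100_000)
bespoke = five_year_tco_bespoke(dev_per_year=180_000, ops_per_year=40_000)
print(cots, bespoke)
```

The point of the sketch is the shape of the comparison, not the numbers: once the migration overrun and ongoing opex are included, the gap between the two options often looks very different from the initial licence quote.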
Even when all this is taken into account, a small cost difference in favour of the bespoke solution alone would not necessarily prevent me from recommending a move to an off-the-shelf solution, as it can pay off in reduced opportunity cost by freeing internal engineering capacity.
However, the biggest factor around how to progress is usually what makes your business unique. This is usually your value proposition and the reason you get the market share you do. This value proposition is often built into your bespoke solution but not something easily replicated in the off-the-shelf solution (as it’s unique!).
This is usually what leads me to recommend either not moving to the off-the-shelf solution or even taking a hybrid approach of replacing all but the unique value proposition and either keeping the bespoke solution or building a replacement for the value proposition only.
All of this can be summed up as: don't give away the Crown Jewels of your business to a vendor who can't do the job as well as you and who can utilise your data to work out how to replicate it for your competitors.
Conversely, if the bespoke solution does not make your business unique then consider a commercial replacement (depending on the costs).
If you want to discuss how to determine how to migrate to new solutions please contact me through the comments, email or LinkedIn.
Photo by Tembela Bohle
You may think that AI is the solution but are you even making the best use of the information you already have?
Last week I was having a chat with my friend, the entrepreneur Mike, in a coffee shop and the subject turned to AI.
During the discussion, we came to the conclusion that most businesses considering using AI don’t understand the data they already have or could have. Often these jewels of information are in disparate locations, difficult to use formats or not even captured.
As we both enjoy a good analogy, we likened this data to reservoirs whose locations are not even known, let alone harnessed to power turbines.
Often these businesses jump to a solution (data warehouse, data lake, data lakehouse…) without understanding what the data is, how it might be used and how much it will cost to get it fit for use, let alone the opportunity cost of not utilising it.
It sounds counter-intuitive, but I tend to build data strategies tactically, one question at a time, incrementally building towards a coherent approach and learning as I progress.
I’ve worked with dozens of clients to:
Often this process can identify data you already have that’s not readily usable because:
In a lot of cases, the missing technology that’s preventing you from getting value from your data is data engineering and not AI.
If you want to discuss how to release the potential of your data please contact me through the comments, email or LinkedIn.
Photo by Pixabay
I find it incongruous that in the software development industry we often talk about languages, frameworks, libraries and architectural patterns but we rarely consider them as construction materials. We do consider some 'structural properties', like performance, resilience and security, but we often make a decision based on these properties and apply it to our entire system.
If we were fabricating something physical (non-digital) would we choose one material for our entire project? Would we ignore other properties like malleability, texture, density? Would we require the same structural rigidity everywhere or would we pick materials that had more ‘give’ in places that require change and add rigidity in places under extreme stress?
We often think about programming languages and libraries as tools but they are part of our product. The choices we make affect the amount of change we can accommodate, and the 'texture' of those languages, frameworks and libraries has a visceral impact on how our engineers work with them, to the point that it can make engineers leave to work on something more fun.
This is often the underlying reason why we see large tech companies that start with one language and then re-write parts (or all) of their system in other languages later, usually once the need for the product has been established, the problem areas identified and the stresses caused by changes, performance constraints, load, etc. measured and understood.
This is also another reason modularisation patterns, such as microservices, have gained popularity. But these patterns are themselves material choices. When we are proving our concept viable (startup mode) should we build lots of separate components in concrete, stainless steel and glass, then fit them together with different glues, screws and fitments (often bespoke), or should we shape something quickly out of polystyrene or wood and test it first?
We should think about computer science as a material science. We should consider the intrinsic properties of our building materials and consider what is suitable for our current environment and how we might evolve to different materials as our environment changes.
If you want to discuss in more detail how the 'material science' of software development affects your software and your development teams, please contact me through the comments, email or LinkedIn.
Photo by Laura Tancredi
As a start-up founder you probably lie awake worrying: "How will my technology cope with another 10 or even 100 thousand users overnight?". Firstly, if you genuinely have this concern, congratulations, it's a great problem to have! Most start-ups never get to the stage of having to worry about exponential growth.
Secondly, your major concern at this stage shouldn't be rapidly adding shiny new features but making sure things don't break catastrophically, so you retain customers and don't suffer irrevocable reputational damage.
Should you build your software as a number of microservices? Should you use a serverless database? Should you use AWS Lambdas/Azure Functions/Google Cloud Functions? Do you need an API gateway…? Too many technologies, too many buzzwords, and too many choices.
In fact, you’re probably worrying about the wrong choices.
If you've read my other writings you'll recognise a theme: I like to think of software as existing within layers of ecosystems.
Scaling your technology successfully relies on the ecosystems it's developed within, which depend, counter-intuitively, on constraints.
If you pick the right constraints for your technology team(s) you provide the right guard rails and support, which in turn will produce the right technology and architecture to respond rapidly to change, like rapid user growth.
If you want to hear more about my thoughts on how to scale your software ecosystems, let me know in the comments.
Neither!
Well that was quick… on to the next subject!
Seriously though, when I hear a startup/scaleup asking how to hire many engineers quickly it's usually symptomatic of a business whose current engineers are swamped, or a business that's trying to tackle a founder/investor wish list of many targets at once.
The first question I'd ask is: why do you think you need 10 more engineers or a superstar engineer?
Often the answer is either:
Or even both.
So if you have either (or both) of these issues, your problem is that your team is context switching (what I less politely call "thrashing"). Context switching is highly inefficient, so you're not using your current engineering capacity effectively.
In general, doing something productive in software engineering takes concentrated focus and unbroken time: at least 2-3 hours, or about half a day. This is different from a manager, who can gather information or make a decision in a 30-minute meeting. Engineers work on a different schedule to managers.
You don't need to hire more engineers, unless you're hiring a consultant or senior engineer to coach your existing team to be more effective.
How can you stop context switching?
I have a lot more detail on these subjects and if these are issues you struggle with let me know and I’ll write more.
You never want to hire a so-called 10x engineer, or at least one fitting the usual definition: someone who churns out loads of code fast is usually disruptive to a team and often produces code that's hard to grok and harder to change. What you want is an engineer who multiplies your team's effectiveness by five, i.e. not necessarily the best engineer technically but someone who identifies opportunities to communicate better and promotes quality and changeability as first-class concepts.
Ever since I took up Clojure as my preferred programming language in 2013, I've been asked about, or seen in the Clojure community, discussions around "Why is Clojure not widely adopted in the software development industry?"
If you want some perspective on the arguments in this debate here is a small selection of the discussions:
I’ve engaged in some of these conversations over the years and I’ve put forward similar opinions as many of the respondents in the discussions above.
The arguments over why Clojure is not more ‘popular’ range from the banal to the ridiculous:
I've left off this list the most ridiculous argument because it's less of an argument and more of a self-fulfilling prophecy: "We can't hire 100 Clojure developers".
This last argument simultaneously reinforces a reason not to use Clojure, by not providing demand for developers to learn or migrate to the language, and misunderstands one of Clojure's benefits, being able to do more with fewer developers, by assuming you even need 100 developers. Unless you're Nubank-sized you probably don't!
However, although annoyingly silly, this argument about a smaller pool of developer talent is probably the most destructive argument against increasing Clojure adoption, as it's the one that non-technical managers, without a detailed understanding of the 'economics' of the constraints and benefits of different programming languages, latch on to.
You can read in the many discussions online (some mentioned above) somewhat subjective analyses of the most common arguments; however, thinking about this recently, I thought I'd attack the problem from the opposite angle.
In order to get to the root of why people and organisations find Clojure interesting enough to adopt, we need to look at who adopts Clojure.
I will take a point of personal privilege and 'speak from authority', citing my personal experience of working with more than 7 organisations that decided to adopt Clojure, serving on the programme committee of a Clojure conference for 4 years and spending over 7 years talking to Clojurians online.
My experience is that most people who were in the position to be decision-makers in adopting Clojure in an organisation are:
Obviously my experience is not a scientific survey but these characteristics seem to be prevalent in the decision-makers who adopt Clojure.
For this question we have some slightly more objective data. The State of Clojure 2022 survey:
As we can see, the majority of organisations using Clojure are fewer than 100 people in size.
Perhaps more telling is the number of people in the organisation using Clojure. It tends to be either organisations with small engineering functions or just a few teams in the organisation.
What kind of applications is Clojure being put to?
As we can see most people are currently using Clojure for web development, building and delivering commercial services and enterprise applications. We can fairly assume that the Open Source development is supporting the building of the rest of the applications.
What about the kinds of people using Clojure? What does that tell us?
We can also see that most people moving to Clojure come from a programming language typically used to solve practical business problems. These languages are also typically used in web and mobile-based applications rather than more scientific or engineering problems. These languages are also primarily object-oriented in focus although several can be written in a more functional programming style.
Judging from the number of people who skipped this question (17 of 2373 = 0.7%) we can assume that very few people are new to programming; almost none, it seems.
Lastly, how long have people adopting Clojure been programming professionally?
and how long have they used Clojure for?
We can see from this that most people using Clojure have been programming for more than 5 years, and these two results seem to reinforce the assumption that almost all have come from programming in another language first. Also, most people adopting Clojure seem to stick with it. That's hard to determine solely from this survey, as respondents self-select to be mainly current Clojure users, but the high number of people using Clojure for more than 5 years is indicative of high retention.
So, what does this tell us about ‘Who is adopting Clojure’?
So, back to the earlier question, ‘Why do people adopt Clojure?’.
The answer I will give is obviously subjective to some extent, but rooted in my personal experience of working with over 7 different organisations on over 35 different services or applications.
I believe that people adopt Clojure because they see in its philosophy and constraints a language that tends to naturally support:
Again this is my subjective opinion, but I think the answer lies in the profile of who recognises the need to adopt the language. I think you have to have suffered a number of failures and frustrations with other programming languages to recognise that the benefits far outweigh the alien syntax of a Lisp with a functional programming paradigm and immutable data by default.
Photograph attributed to ~bostonbill~ under a Creative Commons licence.
Back in 2018 I wrote a somewhat critical essay on Clojure and how a lack of developer discipline can easily build up in a Clojure codebase, making it hard to grok and hard to change.
I just wanted to revisit this as some time has passed and in the meantime I’ve worked on three more Clojure codebases as well as a Java one.
Although I feel all the points I made in the original post are still valid, my recent experience has tempered my opinions.
My main critiques could be summarised as:
Having worked on three more Clojure codebases and another Java codebase, I'd like to revisit my opinion on the cognitive load Clojure tends to introduce compared with that introduced by statically typed languages (or at least those not derived from the ML tradition of Hindley–Milner type systems).
One of the things I noticed when working on these codebases was that the higher the rate of change, the lighter the cognitive load imposed by Clojure was compared with Java.
Initially I thought this was counter-intuitive, as I'd expected that the strict types of Java would reduce the amount of mental juggling I had to do to construct a model that needed to change. Although having types (classes) in Java helped me superficially to determine the shape of data in my mental model, that initial at-a-glance definition wasn't much of a time saver.
Although Java classes show the data encapsulated in a concept:
Treating data as a generic data structure (a map [dictionary] or a vector [array]) means within a given context (i.e. within a call chain or a single function) you only have to change/add/remove a field and that only impacts that field and the access path of that field.
With good naming, and use of something like Spec to describe the data at the boundaries of the system or its components, generic data structures may add a few seconds to building a mental model, but this is easily compensated for by the minutes or hours saved in changing types (classes/interfaces) or introducing new ones.
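For readers who don't know Clojure, this style can be loosely mirrored in Python: data is a plain dict, validated only at the system boundary (the `validate_order` function stands in for something like clojure.spec), while internal code is indifferent to added fields. A hedged sketch, with all names hypothetical:

```python
# A loose Python analogue of Clojure's "plain data, validate at the
# boundary" style: internally, an order is just a dict, so adding a
# field upstream only touches the code paths that actually read it.

def validate_order(order):
    """Boundary check, standing in for something like clojure.spec:
    only the system edge cares about the full required shape."""
    required = {"id": str, "items": list, "total_pence": int}
    for field, ftype in required.items():
        if not isinstance(order.get(field), ftype):
            raise ValueError(f"order missing/invalid field: {field}")
    return order

def apply_discount(order, pct):
    # Internal code treats the order as an open map: unknown fields
    # pass through untouched, so adding e.g. "gift_note" upstream
    # requires no change here.
    discounted = dict(order)
    discounted["total_pence"] = order["total_pence"] * (100 - pct) // 100
    return discounted

order = validate_order({"id": "o-1", "items": ["hat"], "total_pence": 10_000,
                        "gift_note": "Happy birthday"})
print(apply_discount(order, 10)["total_pence"])
```

The design point: the shape of the data is enforced once, at the edge, rather than encoded into a class hierarchy that every intermediate function depends on.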
Therefore, I think that Clojure's 'disadvantages' as a dynamically typed language that doesn't enforce an up-front data-definition burden on the developer are more than outweighed by the advantages of making changes faster and in a more isolated manner.
I'm old.
No, really, relative to most software developers I’m really old!
I’ve been a professional software developer for over 30 years and I’ve designed software systems for over 29 years.
This means I've seen software trends come and go, and then return again in a different guise… multiple times. During that time I've had to forget many tools, techniques and languages and learn new ones. Recently this has led me to think about what knowledge has endured over that time.
I can only think of a few subjects that I studied at degree level or below that are still highly relevant today. Setting aside basic mathematics and underlying computer science concepts, like logic, there is one field that I think has not only stayed relevant but become even more so: Systems Theory.
People have literally written whole books on this, so I can't explain it in any depth in a blog post, but here's a definition:
Systems Theory is the interdisciplinary study of systems, where a System is:
A set of elements or parts that is coherently organised and interconnected in a pattern or structure that produces a characteristic set of behaviours, often classified as its "function" or "purpose".
Obviously in software development we often think about systems. However, we often lose sight of a number of truisms.
Systems are dynamic. They consist of stocks (‘reservoirs’ of information), flows (of information) and feedback loops (with delays).
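A minimal sketch (in Python, with arbitrary numbers) of a stock, an inflow and a delayed negative feedback loop illustrates why the delays matter: because the correction acts on stale information, the stock overshoots its equilibrium and oscillates before settling.

```python
# Minimal stock-and-flow sketch in the Systems Theory sense: a stock
# fed by a constant inflow, drained by a corrective outflow that only
# sees the stock level as it was `delay` steps ago.
# All numbers are arbitrary, purely for illustration.

def simulate(steps, inflow=10.0, target=50.0, delay=3, gain=0.5):
    stock = 0.0
    history = []  # past stock levels; the feedback loop reads old data
    for _ in range(steps):
        observed = history[-delay] if len(history) >= delay else 0.0
        outflow = gain * max(0.0, observed - target)  # delayed correction
        stock += inflow - outflow
        history.append(stock)
    return history

levels = simulate(30)
```

Running this, the stock climbs well past the point where an instantaneous controller would have stabilised it, then oscillates: a toy version of the boom-and-bust behaviour delayed feedback produces in real organisations.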
If we are trying to solve a problem using software one of the first things we need to do is draw a boundary around what it is that we are trying to solve: What is inside our problem and what is external?
In Domain-Driven Design this boundary is called a 'bounded context'.
Defining a bounded context is both critical to success and really hard. I've never worked on a software system where we got this context 'right' first time, with the exception of a system rewrite two years after the first system was written. Even if, as in that case, the bounded context reasonably contains the problem, the world moves on and the problem space changes.
This is the fundamental driver for 'agile' methods. The acceptance that things will change and our software needs to adapt to this change. As a result, we accept that we, the development team, are an integral part of the system over time. Our bounded context is an imperfect model of the problem our software is representing. The runtime environment, the development team, the organisation we work in and the market that organisation is operating in all have an effect on that model and are potentially affected by the model's dynamics.
So if we accept that the team, department, organisation and the market we operate in all influence our software how do we make changes to that larger system if we want to transform it? This is often the question organisations ask when starting a transformative journey.
Examples of these transformative journeys are often expressed in trite marketing phrases like 'Agile Transformation', 'Digital First Strategy' and 'Customer-centric Approach'. However, I would add 'Evolutionary Software Architecture' to that list.
So what are the levers we can use to effect change in whole systems?
In increasing order of effectiveness:
Some of the levers in that list may appear to be beyond the scope of a development team or a software architect, but it's important to understand where pressure to resist change comes from and where we can use these same levers to effect change.
The order of effectiveness of the levers shown above is somewhat fluid. Depending on the context you may find some movement in the order, but it's generally limited to swapping two adjacent 'levers'.
If we assume the ascending order of effectiveness shown we can use this to understand whether we are likely to be effective in driving change using one of these levers.
Assume that we are trying to change an organisation to react more quickly to change from the market it operates in, what is typically referred to as an ‘agile’ transformation.
Adding visibility of 'stories' delivered by a single development team through some kind of information radiator like a 'sprint' or 'kanban' board (information flows) is not likely to make a drastic change to the organisation's responsiveness. We haven't incentivised team members to collaborate (rules) or given the team authority over their own tools and work (self-organisation), and therefore the current hierarchy of communication and management structures will apply pressure to resist responsiveness to change.
In fact, the reason most 'agile' transformations fail, or at best only partially succeed, is that the lever you actually need to 'pull' to make the whole organisation more responsive is at least at the level of 'goals' for the entire organisation, and more often that of a complete 'paradigm' shift for the entire organisation.
Note that not only is the lever further down the list, but the scope of the lever's focus is much broader. Introducing agile information radiators to development teams is not going to address the way work flows to the teams from other parts of the organisation, or how the work is delivered or measured.
True agile transformations are a complete mind shift in the whole organisation, driving fundamental restructuring of all the systematic structures in the organisation.
As, by definition, evolutionary architecture means software design and implementation that is responsive to change, it is predicated on the goal of the organisation responding rapidly and effectively to change.
Evolutionary architecture is the art of self organisation of both the human subsystems that produce software and the automated systems that form the environment and materials for that software.
Therefore, to be successful in producing architecture capable of evolution, which operates predominantly at the level of 'self-organisation', we need to already be in an organisation that embraces a paradigm of change.
For a more detailed discussion of the various potential tools and techniques of evolutionary architecture I suggest reading my blog post on Prerequisites for evolutionary architectures and/or read Building evolutionary architectures by Neal Ford, Rebecca Parsons & Patrick Kua.
Designing software to be flexible and changeable is arguably the most important architectural concern. I often get other software architects saying "What about performance?" or "What about security?" I'm not saying these other properties are not important to consider early on. They are. However, if we optimise our architecture for change (evolvability), then when we discover a performance issue or a security vulnerability we can change our system to address it. The ability to respond quickly to issues like these is exactly what makes evolutionary architecture so essential.
You can think of the way a species adapts to its environment in the same way that you think of evolutionary architecture. To be successful, animals need to produce new generations with advantageous traits, respond to feedback from the environment, and leave room for failure by falling back on what works.
Software is similar. You need to make sure it's adaptable and that you're making changes to your system based on what works. There are a few key ways we can create these adaptable architectures.
Pick constraints to support rapid change
In order to support evolution in software we need to be aware of the constraints of the software and the environment that the software operates in.
As software architects and developers we have control over some aspects of the environment we build and run software in. Here are some of the constraints we might want to consider to support change/evolution.
At the start of my career, I believed that any Turing complete programming language was equivalent to any other and the language picked was not that important. As I’ve become exposed to more programming languages, paradigms, libraries, and frameworks I’ve realised that the ‘building materials’ we pick have a huge impact on the inherent properties of our software systems, especially on changeability.
When you start building a new system, consider the following:
In order to evolve, our software needs to be easy and quick to release, and we need feedback about its appropriateness during development and while in production. Therefore we should pick tooling and approaches that support those properties. Here's a non-exhaustive list of things to consider:
Though it may sound frightening, it can be useful to incorporate production testing alongside these other testing methodologies. Sometimes testing less, and letting something alert if it fails, can be a risk worth taking, and can even be advantageous in detecting the actual problem in production. Production is the only real test environment. However, this is a risk judgement dependent on the problem, the architecture, etc.
Implementing some or all of these approaches can enable you to respond to a bug by fixing forward fast rather than taking a more defensive approach of testing excessively and reverting if a bug occurs.
Every additional person involved in writing a system increases the number of communication paths on your team quadratically: n people have n(n-1)/2 possible pairwise paths. If you keep your teams as small as practical you reduce the amount of communication and coordination required to implement each change.
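The coordination overhead is easy to quantify: with n people there are n(n-1)/2 possible pairwise communication paths, so each new team member adds as many new paths as there are existing members.

```python
# Pairwise communication paths in a team of n people: n*(n-1)/2.
def comm_paths(n):
    return n * (n - 1) // 2

for n in (3, 5, 10):
    print(n, comm_paths(n))  # 3 -> 3 paths, 5 -> 10 paths, 10 -> 45 paths
```

Going from a team of 3 to a team of 10 doesn't triple the communication burden; it multiplies it fifteen-fold.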
I would extend this principle to keeping the number of teams as small as possible too. Working with the constraint of a limited number of (the right) people will result in innovative approaches to solutions. Just make sure that one of them is not working longer hours (so consider an upper limit on working hours as another constraint!).
Even the most 'brick and mortar' businesses conduct a lot, if not the majority, of their customer interactions via software (even if it's B2B), and therefore your organisation should think of software as a primary means of revenue generation.
Forcing 'project thinking' onto software development is a bad idea. Trying to implement a number of 'features' to a deadline and budget is often necessary, but if every change to your software happens this way then that short-term focus never leads to longer-term consideration of the product or platform and its quality properties.
You can use projects to manage budgets but always think about the product or platform when choosing what to implement.
In order to evolve, our software has to generate a new 'mutated' generation. If you can deploy your changes as a canary deployment, or even, depending on the change, a dark deployment, you can test the change in the only realistic test environment: production.
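As a hedged sketch of how canary routing can work (not any particular vendor's API, and the function names are hypothetical): hash each user id deterministically so that a fixed, consistent percentage of users sees the new build, letting a bad 'mutation' surface in production against only a small cohort.

```python
# Sketch of deterministic canary routing: a stable hash of the user id
# sends a fixed percentage of users to the new ("canary") build, and
# the same user always lands on the same version between requests.
import hashlib

def route(user_id, canary_percent=5):
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] * 256 + digest[1]  # 0..65535, stable per user
    return "canary" if bucket % 100 < canary_percent else "stable"

# A user's version assignment never flaps between requests.
assert route("user-42") == route("user-42")
```

If the canary cohort's error rate or key metrics degrade, you route everyone back to "stable"; if they hold, you ratchet `canary_percent` up towards 100.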
Without mandating a specific architecture (e.g. microservices, event streaming, modular monolith), Domain-Driven Design (DDD) and Event Storming are very useful in determining the boundaries of deployment units.
Don't consider static modelling in isolation
If you build a data, component or class model in isolation you often focus on the wrong attributes. For example, you can model hundreds of attributes about a student in a university, but the bursary system probably only cares about identification, fees and payment information, plus events that change the state of those attributes, whereas student services care about much more personal information. Use the dynamic aspects of the system to guide what information is important to each context.
Thinking about events and flows often leads to discovering the components (deployment units) of the system. Each high-level process that sends and/or receives a message is a potential component.
Here is a list of techniques that can be useful for separating deployment from release:
I like to think about software as 'living' inside increasingly larger ecosystems, in the same way that biological organisms do.
Illustration 1 shows the layers of ecosystems that our software ‘lives’ in. We can see from this illustration that the inner ecosystem can be affected by a change in one of the outer ecosystems but, conversely, the inner ecosystem can cause a change in the outer ecosystems.
Additionally, not shown in this diagram is the frequency of feedback.
Without going into an exhaustive list of metrics and techniques that might be used to provide feedback the following illustrations give you some ideas of what you might want to consider, but as always, there’s no silver bullet and YMMV.
The micro-ecosystem translates to the runtime environment and the software development practices used in developing the software. The illustration above gives some metrics and techniques that can provide feedback at that level.
The biological concept of a Biotope (or habitat) translates to the team and/or product that the software is a part of and Illustration 3 gives some examples of metrics and techniques for feedback at this level.
Illustration 4 shows some examples of metrics and techniques to provide feedback at the organisation or departmental level.
Finally, Illustration 5 gives examples of potential feedback mechanisms at the level of the target market or in other markets.
As you can see, the example metrics I've suggested are a mix of measurements of the processes for producing/running software and measurements of external factors that may impact or be impacted by that software. It's important not to concentrate only on measuring the things you can change directly, but also to measure the factors you only have indirect influence over, to enable your software to evolve to those pressures too.
Although I've given a number of metrics, you should start by identifying between 2 and 5 in each ecosystem level. I also try to map lower-level metrics to metrics in the ecosystem above, to ensure that each metric is driving the desired behaviour.
Lastly, none of these techniques can be implemented effectively without a culture that embraces, seeks out, and thrives on change. The typical characteristics of this type of culture are all the things you see in agile and lean books/courses:
However, I think it's important to emphasise that this culture needs to be pervasive throughout the whole organisation. It's all very well having a software development team or department with this culture, but if the rest of the organisation interfacing with that group has a strictly hierarchical command-and-control culture, this will cause friction and ultimately be much less effective in responding to change. So what do you do if the organisation is not on board with this culture?
Ideally you can convince the ‘C’ level management and the board that this culture is required and demonstrate how to achieve it through some ‘localised’ success by adopting some of the strategies suggested.
However, I have found some success in tying the metrics that 'C'-level management look at, which tend to be in either the organisational/departmental environment or the worldwide/market environment, to the metrics in the ecosystems below, to show how 'moving the needle' on one metric impacts the others. I've also found that this helps start the conversation about software being not a 'cost' to the organisation but, for most organisations, its primary future means of revenue generation, demonstrating that software is not only about automating processes but also about creating new ways to interact with customers in new markets. This in turn can provoke a move away from project-focused development towards product/platform-focused development.
If you're trying to convince upper management or the board of something, demonstrating how it impacts the metrics they care about is hugely powerful.
I’ve covered a lot of ground in this post. However, if there are only three things you take away from this I hope they would be:
There’s no one-size-fits-all solution when it comes to evolutionary architecture. Instead, it’s important to gather feedback over an appropriate timescale, adjust your approaches as you learn and grow, and don’t try to change everything all at once. I hope this post has given you some food for thought and a few practical approaches to try.