Sessions: 14th June 2017

Julian Harty

★ KEYNOTE 1 : Does software testing need to be this way?

Track 1 | 09:45 - 10:35

Software development teams recognise that testing is relevant and important. Testers want to add value and do purposeful, meaningful work; however, software automation is encroaching on and in some cases obviating much of the hand-crafted testing - including some of the 'automated tests' created by teams. As Nicholas Carr says in his book ‘The Glass Cage’: "Who needs humans anyway?"

And yet, humans - people - have much to contribute to crafting excellent software, including testing the software.

In this keynote talk, Julian investigates leading automation techniques, to understand more of what they can offer us in testing our software; how structured testing techniques can help all testers, including "exploratory testers"; and where analytics can inform the tools, approaches and techniques that help us test more effectively.

Julian will also show:

• How to set the direction of what we want to achieve: our choice affects the rest of our decisions.
• Man vs machine: what automation is already able to do. How, perhaps paradoxically, automation can limit what we can do and may reduce our competence - can we find ways to use automation that doesn't reduce our abilities?
• The powerful combination of data mining, common factors and test automation to help find commonplace bugs.
• The effectiveness of exploratory testing.
• Guiding testing using data & analytics.
• Roles for humans: people can add discernment.
• Learning from medicine.
• Next steps in testing.

Having attended this keynote, you’ll gain a better understanding of the power and potential of software automation and how it eats into the current 'value' of testing performed by humans, as well as ways to harness software automation and change our practices so that our work continues to add significant value.

Don’t miss this keynote talk if you are involved in designing, developing, testing and supporting software.

Julian Harty

Julian Harty is a Software Engineer and Tech Educator who helps others to work more effectively, so that they are fulfilled in their work and enjoy what they do.

His specialties are:

• Software testing, including testing by humans and "automated tests"; design of software and user experiences that include wide ranges of users, among them people with disabilities and impairments
• Presenting, sharing & mentoring people
• Mobile Apps; with a particular focus on engineering aspects and testing & test automation. This work encompasses various platforms e.g. Android, iOS & mobile web.

Julian speaks, presents, facilitates and teaches at conferences and workshops globally and has given keynotes in multiple countries over the years. Another role he enjoys is coaching and mentoring, where he’s equally happy to work with senior vice presidents as with junior engineers.

Gerie Owen

Agile Teams: When Collaboration becomes Groupthink

Track 1 | 11:05 - 11:50

Does your agile team overestimate its velocity and capacity? Is the team consistently in agreement with little discussion during daily stand-ups, iteration planning or review meetings? Is silence perceived as acceptance? If so, collaboration may have become groupthink.

Some aspects of the agile team that are meant to foster collaboration, including self-organisation and physical insulation, may also set the stage for groupthink. Groupthink is the tendency of groups to minimise conflict and reach consensus without fully analysing all aspects of their ideas. One way to mitigate groupthink is by using CDE: Container, Difference and Exchange, the factors that influence how a team self-organises, thinks and acts as a group.

In this talk, Gerie will apply these theories to show agile teams how to manage their inevitable conflicts. She’ll review the factors leading to groupthink, show how to recognise the symptoms and develop some ways of preventing it. Then using CDE theory, she’ll show what managers can do to positively influence the agile team without interfering in its self-direction.

This talk aims to teach you how to apply these concepts to your own teams and use them to build high-performing teams.

Gerie Owen

Gerie Owen is a Test Architect who specialises in developing and managing test teams. She has implemented various Quality Assurance methodology models, and has developed, trained and mentored new teams from their inception.

Gerie manages large, complex projects involving multiple applications, coordinates test teams across multiple time zones and delivers high quality projects on time and within budget. In her everyday working life, Gerie brings a cohesive team approach to testing. She has also presented at several conferences and authored articles on testing and quality assurance.

Antonio Robres

One Testing tool to rule them all

Track 2 | 11:05 - 11:50

One of the main problems with test automation and test case definition is the wide range of different tools used for various purposes and multiple testing activities. Using several tools not only requires skilling-up with all of them, but also pushes up training costs for the whole team.

Another disadvantage of using a variety of different testing tools is the challenge of integrating them to automate the whole process, removing the need for manual intervention.

To avoid all these problems, we need a single tool that tests at all levels, includes test case definitions and does not require integration with other tools. Will such a tool ever exist? Well, it already does, and it is being used by thousands of developers around the world: Python!

Python provides several libraries to support testing at all levels, from unit tests to UI testing, through component testing and performance testing. Using Python can: increase the speed of test automation; improve the maintainability of tests by reusing code and utilities across the different test levels; and improve communication within the development team.
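The kind of cross-level reuse described above can be sketched in a few lines. This is a minimal illustration with hypothetical function names, not code from the talk; with pytest, the `test_` functions below would be collected and run automatically:

```python
# A minimal illustration of sharing one utility across test levels.
# All names here are hypothetical.

def normalise_price(raw):
    """Shared utility: parse a localised price string into cents."""
    value = raw.strip().replace("€", "").replace(",", ".")
    return int(round(float(value) * 100))

def fake_api_get_price(product_id):
    """Stand-in for an API call; a real suite might use requests here."""
    return {"widget": "3,50 €"}[product_id]

def test_normalise_price_unit():
    # Unit level: exercise the helper directly.
    assert normalise_price(" 3,50 € ") == 350

def test_price_via_api():
    # API level: reuse the same helper to validate a (stubbed) response.
    assert normalise_price(fake_api_get_price("widget")) == 350
```

Because both levels share one helper, a fix to the parsing logic propagates to every suite that uses it.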

In this talk, Antonio will show you how to test at different levels (unit testing, API testing, frontend testing and performance testing) using only Python libraries. He will also show you examples of how to reuse code across different types of tests to reduce the degree of maintenance your test automation requires.

Antonio Robres

Antonio Robres is QA Manager at Telefonica R+D in Barcelona. He studied Telecommunications Science at the Polytechnic University of Catalonia and has a Master’s Degree in Telecommunication Management. He has been working for 9 years in the field of software testing and quality engineering for companies such as Telefonica, Gas Natural and Grifols. His work focuses on the design and automation of testing projects, mainly in the field of web services. He is also a regular speaker at international conferences.

Albert Tort

A Case Study of an Enhanced DevOps Ecosystem for Development, QA and Operations acceleration

Track 3 | 11:05 - 11:50

DevOps approaches and agile methods pose new challenges: setting up efficient ecosystems for Development, Quality Assurance and Operations, and accelerating through automation, while iteratively evolving a valuable product through communication and collaboration.

In this talk, Albert will present a real case study in which he set up an operative cross-wise DevOps ecosystem. The ecosystem was created as a set of coordinated service components from different sources that interacted among themselves to establish a pragmatic approach to applying DevOps in practice.

He’ll present both the approach and concrete case study, showing the development of a Quality Assurance (QA) acceleration tool aimed at automatically generating test cases from user stories.

Albert will also explain how the tool has been progressively incorporated into a development ecosystem to improve QA activities (as a minimum viable product was developed), and how it has proven to be a reusable, innovative acceleration solution that now forms part of an overall general approach.

Finally, he’ll also cover other aspects necessary in transforming this working ecosystem: organisational culture, business-technology alignment, explicit working processes, big-picture analysis capabilities and iteration-by-iteration feedback.

Albert Tort

Albert Tort is a Software Control & Testing specialist in Sogeti Spain. Previously, he served as a researcher at the Services and Information Systems Engineering Department of the Universitat Politècnica de Catalunya-Barcelona Tech. As a member of the Information Modeling and Processing (MPI) research group, he focused his research on conceptual modeling, software engineering methodologies, OMG standards, knowledge management, requirements engineering, service science, semantic web and software quality assurance.

Ignacio Lopez

Comparisons are not always hateful

Track 4 | 11:05 - 11:50

In a world as global as the one we live in, it is necessary to know our position in the market.

Some companies carry out comparisons to prove how good they are; others to prove that things were previously done wrong and are now done properly.

Some comparisons are hateful but others are necessary. The important thing is to have a clear objective.

We are all used to watching television adverts for websites that compare prices of insurance, mortgages, flights, hotels, cars, appliances… but what comparisons can we make in the software industry?

There are models to compare the productivity, costs and providers of companies that develop software. Nearly all comparisons revolve around price, but in Europe we cannot compete on price with the Asian and Latin American markets; we have to compete on quality.

How do we compare the quality of the software we produce and the risks we assume?

In this talk, Ignacio aims to show the power of proper comparisons: what can be compared, what we should be comparing, what shouldn't be compared and which data can be used to carry out comparisons, but most importantly how to compare the software risk of the development carried out, so as to obtain the best outcome and establish an improvement model suited to the needs of the organisation, making it as competitive as possible.

Ignacio López

Ignacio Lopez is currently the Director of the Risk Governance Area at LEDAmc, a consulting firm specialised in supplier management in terms of quality and productivity. After working for 10 years in development teams, he has specialised over the last 15 years in subjects related to the management and optimisation of software testing, as Director of SQA at different companies (Meta4, InOutTV…) and leading the software testing business areas of several consulting firms (Aventia, LEDAmc), helping to implement testing processes in large companies. Ignacio holds a degree in computer engineering from the Universidad Politécnica de Madrid. In recent years he has discovered the power of using function points related to software testing and quality to optimise and control processes.

Enrique Sanchez

An internal look into using Appium for mobile app test automation

Track 5 | 11:05 - 11:50

In recent years, test automation on mobile devices has become increasingly important and commonplace, with many companies choosing Appium as their tool of choice.

In this master class, Enrique Sánchez will talk about how Appium works, the philosophy behind the library, what protocol it uses, how it handles sessions with devices and how it accesses various elements. Enrique will help you to better understand how Appium works and how to be more efficient when working with the library.
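As background to the session handling Enrique covers: Appium speaks the WebDriver protocol over HTTP, so each client command becomes a REST call to the Appium server. Below is a sketch, with illustrative values and a hypothetical helper, of the JSON body a client POSTs to `/session` to open a session (Appium 1.x, current at the time of this talk, used the JSON Wire Protocol's `desiredCapabilities` shape shown here; W3C WebDriver clients instead wrap capabilities in an `alwaysMatch` block):

```python
import json

# Hypothetical helper building the JSON body a client POSTs to /session.
def new_session_payload(platform, device, app):
    caps = {
        "platformName": platform,  # e.g. "Android" or "iOS"
        "deviceName": device,      # which device or emulator to target
        "app": app,                # path to the .apk / .ipa under test
    }
    return json.dumps({"desiredCapabilities": caps})

payload = new_session_payload("Android", "emulator-5554", "/tmp/app.apk")
```

From there on, every driver call (find element, tap, type) is another HTTP request made against the session the server created.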

Enrique Sánchez

Enrique is a Computer Engineer who has been working for the past 6 years in the world of QA and testing. Throughout his career he has worked for large companies such as BBVA and Telefónica as well as start-ups like Tuenti and Jobandtalent, combining setting up teams and processes with automation. He is currently Lead QA at Cabify, where one of his goals is to bring AI to the world of testing.

In his spare time he teaches at U-Tad University while trying to obtain a doctorate in Artificial Intelligence and also co-organises #MADQA.

Israel Rogoza

IoT testing: How to overcome 5 big challenges

Track 1 | 12:00 - 12:45

Gartner says that more than 6.4 billion Internet of Things (IoT) devices will be in use in 2016, and that number will grow to more than 20 billion by 2026. Testing these devices - which range from refrigerators that automatically place orders to the supermarket, to self-driving cars - will be one of the biggest challenges to face device manufacturers and integrators in the coming years. Effective testing is critical. But what's the best approach?

In this talk, Israel will discuss some of the important considerations for testing IoT devices, and give you some vital tips that you can use to help you address these considerations.

Israel Rogoza

Israel Rogoza is an experienced QA & Professional Services Engineer with more than 7 years' experience in enterprise software development and testing. For the last 3 years, he has been the QA Tech Lead at HPE Software, responsible for the backend automation and manual testing of the StormRunner Load and LoadRunner load testing products; the role also includes a strong awareness of, and technical commitment to, customer needs.

Prior to HPE, he was a Professional Services team leader at NCR. He’s highly experienced, has a solid understanding of a diverse range of business management skills and strives to be successful in any role.

Patxi Gortazar

Experiences using Docker in a complex CI environment

Track 2 | 12:00 - 12:45

Docker container technology has gained a lot of attention in the last couple of years. Using Docker in a Continuous Integration (CI) environment can be a great advantage over using VMs when it comes to complex CI scenarios. Even if containers are not used at deployment, there are many possible outcomes from using Docker for CI.

In this talk, Patxi will tell us about his experience of managing complex CI scenarios using Docker. He’ll show how his team used Docker in their tests, how they built complex tests involving several containers, and how they managed the complete infrastructure. He’ll also describe how using this infrastructure is far easier than traditional CI environments.

Patxi will take you on a journey to look at the efforts required to build a CI infrastructure around Docker, showing you what the benefits are, and how to tame the corner cases of using Docker in production.
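As a rough illustration of the multi-container test scenarios Patxi describes, a CI job can bring up the service under test, its database and the test runner together, so the whole job reduces to a single `docker-compose up`. This is a hypothetical sketch with invented service names, not Kurento's actual configuration:

```yaml
# Hypothetical docker-compose sketch of a self-contained CI test job.
version: "2"
services:
  db:
    image: postgres:9.6        # backing database for the service under test
  app:
    build: .                   # the service under test, built from this repo
    depends_on:
      - db
  tests:
    build: ./tests             # image containing the test suite
    depends_on:
      - app
    command: ["./run-tests.sh"]  # runs the suite against the app container
```

When the `tests` container exits, its exit code tells the CI server whether the job passed, and `docker-compose down` disposes of the whole environment.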

Patxi Gortázar

Patxi Gortazar is an author and PhD scholar in Computer Science. With over 12 years’ experience in teaching and giving talks at conferences, he works as a DevOps specialist at Kurento, a WebRTC project, managing the CI infrastructure on a large scale.

José Moreno

Cybersecurity and ethical hacking testing

Track 3 | 12:00 - 12:45

Doing nothing is no longer an option: more and more consumers are affected by cybersecurity gaps. Some studies say that one in three consumers would close an online account, or stop doing online business with the company they consider responsible, as a result of a cybersecurity failure.

In this talk, José will discuss good practices for adequately protecting the security of systems, reviewing the different solutions and tools that Cybersecurity Engineers deploy and operate in their day-to-day activities to protect the IT infrastructures of large companies and corporations.

These solutions will help you gain insight into the cybersecurity sector in corporate and work environments.

In addition, José will give an introduction to the different techniques and tools of Web Hacking based on: the origin of vulnerabilities, vectors of attack, operations, distribution channels and mitigating measures.

In order to consolidate the concepts presented in this talk, a real practical case will be analysed, giving recommendations on the phases of a security audit project based on Pentesting.

José Moreno

José Antonio Moreno Galeano is a Technical Engineer in Computing from the Faculty of Informatics at the Polytechnic University of Cáceres. With 20 years of experience in the Information Technology sector, he has ample experience as a Security Engineer / Pentester, acquired in biometric security projects as well as in fingerprint identification and computer security audits.

Miguel Rial

How can the cloud help us in our testing strategy

Track 4 | 12:00 - 12:45

The search for faster delivery of new products and services has made the adoption of agile methodologies and the principles of DevOps more and more frequent.

But this increase in the number of releases together with the growing diversity of platforms and channels is a challenge for organisations, which seek to maintain quality while increasing the pace of deliveries.

In this talk, Miguel will explore how cloud solutions can help us face some of these challenges and how they fit into our testing strategy.

Miguel Rial

Miguel Rial holds a Telecommunications Engineering degree from the University of Vigo and has developed his career over the past 17 years in the field of management software, particularly in application lifecycle management solutions sales. Throughout his career he has worked for companies such as Telefonica I+D, Telefonica Móviles, Mercury, HP and HPE, as a test engineer, sales engineer and business developer for development and testing solutions in Spain and Southern Europe. Miguel is currently a Business Consultant for the EMEA Southern region at HPE Software.

Mike Jarred

★ KEYNOTE 2 : The continuous evolution of testing in the FCA

Track 1 | 14:15 - 15:05

This keynote talk will describe some common challenges that face Heads of Testing when they join an organisation. The talk will draw on Mike’s experience of early engagement with his stakeholders and how the FCA Test Group has experienced a huge amount of change in order to provide a modern, efficient and valuable testing service by rethinking its approaches to testing.

Mike will talk about:

• The importance of stakeholder engagement and understanding the varying risk appetite levels of stakeholders.
• How the Test Group tracks its effectiveness in delivering against stakeholders’ risk appetite.
• The importance of understanding value for money and how to visualise this when delivering a testing service to stakeholders.
• The drivers behind changing the Test Governance approach to one of Test Assurance, ensuring that the service delivered is relevant, targeted, optimal and commensurate with risk appetite.

From this keynote, you will recognise the value of good stakeholder engagement when delivering a testing service, and realise how to assure good testing from your suppliers during periods of disruptive change.

Mike Jarred

Mike Jarred is a testing practitioner with over two decades of experience in Software Testing and Quality Management gained in a diverse range of industry domains. Mike works for the Financial Conduct Authority as a Senior Manager leading their Solution Delivery Optimisation team.

He is passionate about optimal methods for delivering software, as well as software testing and using information generated by project teams to improve both software quality, and to initiate organisational improvement. Outside of his work at the FCA, Mike is the Programme Chair for the Assurance Leadership Forum. He also mentors Test Managers, as well as attending and speaking at conferences to continue his education in testing.

Raji Bhamidipati

Pair testing in an agile world

Track 1 | 15:15 - 16:00

In all my years of being a tester, I mostly conducted testing all on my own. Why? I don’t really know. It’s what I had seen others do, and what I did myself. Sure, I would go and ask someone else to review my findings if I couldn’t come to a conclusion. But it never occurred to me that I could pair with someone whilst testing. Until fate took over and presented an opportunity for me to do so.

I was the sole tester on a feature team and after a few months, we had another tester (let's call him John) join us. John had wonderful product knowledge and knew the basics of the project we were working on. Initially, John and I started pairing whilst testing to bring John up to speed. However, during this process, we started talking and asking each other questions. We realised that our testing was of better quality when we paired. This experience made me take a step back and re-evaluate my skills as a tester. I now consider ‘being able to pair’ one of the key skills a tester should possess. My testing style has changed considerably since I started pair testing and I now pair with others during various phases of development.

We have all heard of benefits of pair programming and see it frequently being applied in agile teams. However, I haven’t heard quite as much about pair testing! I have read a few papers and blogs on this topic, and have also heard of a couple of talks at conferences. Pair testing can be very beneficial to teams when applied correctly.

In this talk, Raji will show you her tips and suggestions to identify opportunities for pairing with others. She will arm you with the advantages that pairing can present to your team, which you will be able to use to convince other team members to pair. You will discover new ideas on running little experiments on pairing and then use the results of these experiments to evaluate if pairing is right for you or if you are doing it right.

Raji Bhamidipati

Raji Bhamidipati is a Software Tester by heart and a Scrum Master by profession. When Raji first started her career in software testing, she believed that as a tester she would be helping improve quality of products being delivered. Over the recent few years, Raji has realised that the way a team works together has a huge impact on quality and delivery. This realisation triggered a desire to learn more about effective teams. Having worked with some wonderful Scrum Masters, she has found her calling in this role.

Juan Pedro Escalona

Divide and stress: the journey to component load testing

Track 2 | 15:15 - 16:00

A year ago, we only executed performance tests in an integrated environment: every product first went through functional testing, and performance testing was postponed until the end of the development cycle. This approach often led us to face two issues: performance problems being discovered too late in the development cycle, and 'misbehaving' components that ultimately affected the results of the systems depending on them.

To mitigate the risk of ‘misbehaving’ components reaching the integrated environment, we implemented a component testing system, which allowed our development teams to: 1) ensure that each component was at least performing as it did before making changes to the software and 2) get fast feedback on the performance of the software that was modified.
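The baseline comparison in step 1 can be sketched as a simple check that a component's latency has not regressed beyond a tolerance. The names, percentile and 10% threshold here are illustrative assumptions, not the team's actual implementation:

```python
# Illustrative baseline check for a component performance test.

def p95(samples_ms):
    """95th-percentile latency (nearest-rank on the sorted samples)."""
    ordered = sorted(samples_ms)
    return ordered[int(0.95 * (len(ordered) - 1))]

def regressed(baseline_ms, current_ms, tolerance=0.10):
    """True if the current p95 exceeds the baseline p95 by > tolerance."""
    return p95(current_ms) > p95(baseline_ms) * (1 + tolerance)

# Hypothetical latency samples (milliseconds) from runs of one component.
baseline = [100, 102, 98, 101, 99, 103, 100, 97, 105, 100]
healthy = [101, 103, 99, 102, 100, 104, 101, 98, 106, 101]
slow = [130, 128, 131, 127, 133, 129, 132, 126, 135, 130]
```

Run against each component in isolation, a check like this gives developers the fast feedback described in step 2 before a misbehaving build ever reaches the integrated environment.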

In this talk, Juan Pedro will take you on a journey from the initial concept his team had and the requirements they had to meet, up to the how they were testing their latest release. He will show you how their implementation facilitated easier management of the performance test effort, how it reduced resource consumption and how they were able to define multiple component test scenarios.

Juan Pedro Escalona

Juan Pedro Escalona is a DevOps Administrator who has a keen interest in Open Source projects. He specialises in virtualization (EC2, OpenStack, KVM) and networking, and is also a Python and Django framework development enthusiast.

Colm Fox

How to measure quality in today's dynamic projects

Track 3 | 15:15 - 16:00

In this talk, Colm Fox will show us how to measure software quality, what information to collect and what to do with that information, drawing a parallel between the concepts of "Data Intelligence" and "Business Intelligence".

Colm will use Kuscos Analytics from Morphis to show how you can automatically collect and cross-reference information from multiple sources, such as application source code, version control systems, database data dictionaries, project issue-tracker repositories, etc.

Based on these cross-references, Kuscos can cover aspects such as quality control, impact analysis, technical debt calculation and automatic documentation, and can provide a wide variety of metrics and quality indicators, all presented in a graphically attractive, dashboard-based view that is easy to customise, with several reporting and drill-down possibilities.

Colm will also cover how strategies to repair technical debt can be implemented, based on automation and on detecting hot spots: points that, once corrected, produce very significant improvements in applications, with the prospect of 80% of the improvement for 20% of the effort.

Colm Fox

For the last two years, Colm has been Head of Business Development for EMEA with Morphis, a global technology provider for both legacy and current systems analysis, transformation, development and quality certification. This has seen him involved in many high value legacy application modernisation projects with a strong testing and QA component of systems developed 10-30 years ago in legacy environments that have been digitally transformed into modern architectures and environments.

Colm has also been involved in developing Morphis’ strong technology partnerships with leading technology and professional services providers including Microsoft, IBM, Fujitsu, HPE and Accenture. This has given him first-hand knowledge and experience of how best practice is applied in leading organisations around the world.

Jose Rodriguez

Continuous Testing to enable Continuous Delivery

Track 4 | 15:15 - 16:00

In recent years, test automation has evolved from a regression test automation approach, through continuous integration, to become one of the key elements in continuous delivery strategies.

Continuous delivery optimises the time-to-market of products, speeding up the software development lifecycle to the maximum and enabling continuous releases in production environments.

In this talk, José Andrés will show us the continuous testing approach that Softtek testing teams follow to make continuous delivery of software possible. He will show you that continuous delivery is possible with a suitable combination of ingredients: agile development; automation of unit tests, GUI testing and API testing (with special emphasis on API testing); tools to restore and generate test data; environment dockerisation; mocking; and a test automation strategy focused on covering business requirements and the main functionalities of the products.
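Of the ingredients listed, mocking is the easiest to illustrate: a test double replaces a real dependency so API-level tests run deterministically and without a network. A minimal sketch using Python's standard `unittest.mock`; the client and endpoint are hypothetical, not Softtek's code:

```python
from unittest import mock

# Hypothetical order-service client code under test. In a continuous-testing
# pipeline the real HTTP dependency is replaced by a mock, so the test runs
# anywhere, with no network and deterministic data.
def get_order_status(client, order_id):
    response = client.get("/orders/{}".format(order_id))
    return response["status"]

# The mock stands in for the real client.
fake_client = mock.Mock()
fake_client.get.return_value = {"status": "shipped"}

assert get_order_status(fake_client, 42) == "shipped"
fake_client.get.assert_called_once_with("/orders/42")
```

The same pattern scales up: with the backend mocked, API tests can run on every commit, which is what makes continuous testing feasible at continuous delivery speed.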

José Andrés Rodríguez

José Andrés Rodríguez is a Telecommunications Engineer from the Universidad Politecnica in Valencia and has over 15 years of experience in the testing world. He has participated in many test automation projects using different tools and frameworks. He has created automated tests, delivered consulting services, implemented testing methodologies and continuous integration, created continuous delivery processes, provided training and enabled test teams, both on-site and off-shore. José Andrés is currently working as a Delivery Manager at Softtek.

Sonali Patro

Mobile Test Automation using open source

Track 1 | 16:30 - 17:15

Mobile test automation is a challenging area in most software organisations and although there are a lot of tools available in the market, selecting the right tool for your testing needs is not an easy task.

During development, unit testing is generally carried out by developers, but such tests are somewhat limited in detecting bugs that could affect your mobile platform at a later stage.

In this talk, Sonali will walk you through combining white box testing with open source software (OSS), which any organization can adopt to unearth mobile platform defects from a very early stage.

She will focus on various aspects of automating generic environments, where Android, Windows and other platforms are not tightly coupled together and can be tested agnostically.

Sonali Patro

Sonali Patro is an engineering graduate in Information Technology. After graduation from Bijupatnaik University (Orissa) in 2007, she worked as a lecturer in an engineering college, then with IBM as a Trainee Engineer and then with Symphony and Tangoe. She is currently Senior Test Engineer at Happiest Minds Technologies.

Christiane Melo

A practical experience on teaching software testing to people with disabilities

Track 2 | 16:30 - 17:15

This talk describes an experience of teaching software testing to blind people, deaf people and people with intellectual disabilities, to promote the digital inclusion of these groups in the global market.

Christiane will talk about a methodology of teaching software testing, associated with assistive technologies that allows students to comprehend the discipline.

She’ll tell you her story about how it was possible to train-up disabled people to test software as well as what companies can expect when hiring people with disabilities.

Christiane Melo

Christiane Melo has a Master's Degree in Educational Sciences and is a specialist in Educational Informatics (FAFIRE). She has over 20 years’ experience in Educational Computing and Educational Technology, at the Bradesco Foundation, at Positivo Informática and at the Center for Telehealth (UFPE). She has taught at elementary, middle and higher-education levels as an Applied Informatics professor on Information Systems, Administration and Optometry courses.

She currently works for the NGO Integrarte with ICTs for Youth with Intellectual Disabilities and is partner and founder of t-access testing and accessibility, acting as a Systems Analyst in Educational and Business Accessibility.

Eduardo Riol

Testing Tools in the Ages of DevOps and Agile

Track 3 | 16:30 - 17:15

Collaboration, integration, agility, automation… the way we test is evolving, enriched by good practices from DevOps and Agile methodologies.

The tools we use to support this change are crucial: they define our integration capabilities, the way in which we may use collaborative tools and our capacity to implement different specification, testing and development methodologies.

In order to support a modern vision of Testing in the ages of DevOps and Agile, these tools should support the implementation of methodologies such as Test Driven Development (TDD), Behavior Driven Development (BDD), test automation, exploratory testing and the collaboration among the members of the team.

In this talk we will review the characteristics we want our testing tools to have and analyse three Test Management Tools: Tarantula, Zephyr + Jira and qTest Platform. We will compare their functionality for collaborative, agile and automated environments, as well as the migration capabilities these tools offer from more traditional testing environments.

Eduardo Riol

Eduardo Riol is Technical Leader at the Centre of Excellence on QA & Testing at atSistemas, where he coordinates and delivers services related to Software Quality Assurance and Test Automation. He has previously worked for several technology organisations on the definition and integration of Agile methodologies for development and testing. He is currently interested in the control of technical debt, BDD and QA integration in Agile and DevOps environments. He holds a Software Engineering degree from Valladolid University.

Javier Lisbona

Supporting DevOps: Virtualization Services Demo

Track 4 | 16:30 - 17:15

Service Virtualization focuses on a key problem most organizations face: the time and resources required to set up and manage test environments. Traditionally, people have been running around installing hardware, setting up application servers and database servers, installing application software and configuring all of that. Not only is this very capital-intensive, but as environments have become more and more complex, it is also a very error-prone process that typically involves a lot of scrap and rework.

Service Virtualization enables organizations to address that problem by virtualizing complete stacks of software, hardware and services, enabling developers and testers to stand up test environments in a matter of minutes rather than weeks, to do so whenever they want and, in effect, to start their testing much earlier than has traditionally been possible. It can help organizations transform the way they deal with software quality by:

• Better managing their costs: reduce the hardware, software and labor costs associated with maintaining complex test environments.
• Improving test cycle time: reduce the time wasted waiting on the availability of test environments and setting them up.
• Better managing risk in delivering software: by testing earlier, organizations can avoid late-stage integration issues (Shift Left).
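As a toy illustration of the idea (not IBM's Service Virtualization product itself), a virtualized service can be as simple as a stub HTTP server that answers like the real backend would, letting tests start without the full environment; the endpoint and payload here are invented:

```python
# Minimal sketch of "virtualizing" a dependency: a stub HTTP service
# stands in for a real backend so tests can run without it.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubBackend(BaseHTTPRequestHandler):
    def do_GET(self):
        # Canned response standing in for the real service's /status endpoint.
        body = json.dumps({"service": "payments", "status": "up"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

server = HTTPServer(("127.0.0.1", 0), StubBackend)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The code under test talks to the stub exactly as it would to the backend.
url = "http://127.0.0.1:{}/status".format(server.server_port)
data = json.loads(urlopen(url).read())
server.shutdown()
```

Commercial tooling adds recording, data-driven responses and simulated latency on top of this basic pattern, but the principle is the same: the consumer cannot tell the stub from the real thing.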

Please join me in this talk about how to run virtualization services and where you can apply them in your company.

Javier Lisbona

Javier Lisbona is a Computer Engineer who joined IBM in 2008 with the acquisition of Telelogic. Before joining IBM, he worked for one year as a senior consultant in requirements and testing management in the aerospace industry. On joining IBM, he worked for 4 years as a Technical IT Services Professional within the Rational Software business unit for Spain, Portugal, Greece and Israel. Since 2012, Javier has been a Client Technical Professional at IBM. He is a technical expert on application lifecycle management, with a focus on requirements and testing management. He has worked on outstanding projects in industries such as aerospace, defence and the public sector. Currently, he is working with large companies like BBVA and Banco Santander.

Paul Gerrard

THE GREAT DEBATE : Industry vs Experts sponsored by HPE

Debate | 17:25 - 18:25

A great debate mediated by our program chair, David Evans, where selected people from the industry and professional sectors, along with the audience, form opposing sides to give their opinions on a series of statements affecting today’s Software Testing and Quality Assurance.

Organised By
nexo QA