DATA MODELING ZONE Australia – Speakers List

Steve Hoberman

Monday, 5 March
8.30 am, Room 1

Data Modeling Fundamentals

Assuming no prior knowledge of data modeling, we start off with an exercise that will illustrate why data models are essential to understanding business processes and business requirements. Next, we will explain data modeling concepts and terminology, and provide you with a set of questions you can ask to quickly and precisely identify entities (including both weak and strong entities), data elements (including keys), and relationships (including subtyping). We will discuss the three different levels of modeling (conceptual, logical, and physical), and for each explain both relational and dimensional mindsets.

Steve Hoberman 

Steve Hoberman & Associates, LLC

Steve Hoberman has trained more than 10,000 people in data modeling since 1992. Steve is known for his entertaining and interactive teaching style (watch out for flying candy!), and organizations around the globe have brought Steve in to teach his Data Modeling Master Class, which is recognized as the most comprehensive data modeling course in the industry.

Steve is the author of nine books on data modeling, including the bestseller Data Modeling Made Simple. One of Steve’s frequent data modeling consulting assignments is to review data models using his Data Model Scorecard® technique. He is the founder of the Design Challenges group and recipient of the 2012 Data Administration Management Association (DAMA) International Professional Achievement Award.

Dave Wells

Monday, 5 March
8.30 am, Room 2

Data Governance in a Self-Service World

Conventional data governance practices come from a simpler time when data management was free from many of today’s challenges, such as self-service reporting and analytics.

Traditional data governance focuses on enforcement of controls and gates, which will continue to be necessary. However, these methods must be complemented with support for the autonomy and agility of the self-service world. Enforcement works together with prevention. Guides and guardrails reduce the need for gating. The need to exercise controls is minimized when curating, coaching, crowdsourcing, and collaboration are integral parts of governance processes. In a self-service world, every data stakeholder plays a part in data governance.

You Will Learn:

  • Where governance fits within modern data ecosystems, from point of ingestion to reporting and analysis
  • How various technologies support governance through the ecosystem
  • Process challenges for governing self-service; supplementing controls with collaboration and crowdsourcing
  • Engagement models for governing self-service
  • Organizational challenges for governing self-service; moving from data stewards to stewardship, curation, and coaching
  • Operational challenges for governing self-service; implementing a combination of gates, guardrails, and guides.

Dave Wells  

Eckerson Group

Dave Wells leads the Data Management Practice at Eckerson Group, a business intelligence and analytics research and consulting organization. Dave works at the intersection of information management and business management, where the real value is derived from data assets. He is an industry analyst, consultant, and educator dedicated to building meaningful and enduring connections throughout the path from data to business value. Knowledge sharing and skills development are Dave’s passions, carried out through consulting, speaking, teaching, and writing.

He is a continuous learner – fascinated with understanding how we think – and a student and practitioner of systems thinking, critical thinking, design thinking, divergent thinking, and innovation.

Graeme Port

Monday, 5 March
8.30 am, Room 3

Fact-Based Data Integration: Matching and Transformation

Fact-Based Modeling is an effective approach for defining the structure and semantics of stored data. Data integration involves combining data residing in disparate sources into meaningful and valuable information. Data integration faces two key challenges: matching records from different sources corresponding to the same real-world entity, and transforming the associated data so that the combined information is meaningful. We show that a fact-based integration language can be tightly coupled with a data matching system and compiled to data transformation code for data integration.

We will apply this approach in two scenarios:

  • a Data Vault data warehouse environment, and
  • a project for sharing information nationally for child safety
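To make the matching-and-transformation challenge concrete, here is a toy sketch in plain Python (invented for illustration; it is not the fact-based integration language or matching system described above):

```python
# Toy record matching: decide whether rows from two sources refer to the
# same real-world person, then merge them into one combined record.
# (Illustrative only -- real matchers use normalisation, blocking and
# probabilistic scoring.)

def normalise(name: str) -> str:
    """Crude normalisation so 'SMITH, Jan' matches 'jan smith'."""
    parts = name.replace(",", " ").lower().split()
    return " ".join(sorted(parts))

def match(a: dict, b: dict) -> bool:
    """Match on normalised name plus date of birth."""
    return normalise(a["name"]) == normalise(b["name"]) and a["dob"] == b["dob"]

def merge(a: dict, b: dict) -> dict:
    """Transform: keep the first source's values, fill gaps from the second."""
    merged = dict(a)
    for key, value in b.items():
        if merged.get(key) in (None, ""):
            merged[key] = value
    return merged

src1 = {"name": "SMITH, Jan", "dob": "1980-04-02", "phone": ""}
src2 = {"name": "jan smith", "dob": "1980-04-02", "phone": "0400 000 000"}

if match(src1, src2):
    combined = merge(src1, src2)   # phone filled in from src2
```

The session shows how a modeling language can generate this kind of transformation code rather than having it hand-written.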

Dr. Graeme Port 

Factil

Dr. Graeme Port has been an innovator and leader in data architecture and enterprise software product development for over 30 years. Graeme was co-founder, head of engineering and CTO at ManageSoft, which built market-leading products in business intelligence, application development and application deployment. Graeme has consulted extensively in data architecture in government and commercial sectors.

Graeme received his PhD from the University of Melbourne in the field of Logic Programming.

KEYNOTE SPEAKER

Eddie Sayer

Monday, 5 March
12.00 noon, Room 1

Lessons NOT Learned – Dealing with Habitual Architecture Mistakes

When I was a child, my parents instilled in me the belief that no one should make the same mistake twice. “Once is a mistake; twice is a habit,” they would say.

Yet, I see the same architecture mistakes habitually repeated in numerous organizations. The result is far too many architectures characterized by excessive costs, supportability issues, design inflexibilities and general business dissatisfaction.

Bad habits are hard to break, but it can be done. And it can be done NOW!

This keynote examines habitual architecture mistakes most organizations make and explores best practices for avoiding the mistakes. It concludes with proven recommendations for achieving architecture success. You will leave the session not only better educated on architecture but armed with ideas for architecting high-quality solutions in your own environment.

Eddie Sayer  

Enterprise Data Management, Teradata

For over two decades, Eddie has been helping large organizations gain sustainable competitive advantage with data. He has worked in both leadership and individual contributor roles, specializing in enterprise data management programs and information architecture. Eddie joined Teradata in 2008 and has since conducted numerous engagements with some of the most recognized companies in the world, including Wells Fargo, Verizon, T-Mobile and Coca-Cola, helping to set direction for data management.

Prior to joining Teradata, Eddie was an Information Architect at CheckFree Corporation, the largest provider of e-billing and electronic bill payment in the US. Previously, Eddie held similar positions at Macys Systems & Technology and Inacom Corporation. Eddie is a founding member of the Georgia chapter of Data Management International (DAMA) and is a frequent speaker at industry events.

Andy Peyton

Monday, 5 March
2.15 pm, Room 1

The Impact of Agile on Enterprise Data Development

The current trend in system development is the use of Agile techniques. However, most literature and guidance discusses Agile only as an application development technique, with no mention of its use in the modelling and design of databases. Data practitioners have very little published industry best practice to rely on.

The Agile problem is further complicated when you consider differences between a simple application database and the need to develop an enterprise’s principal data store. This may be the database that holds the enterprise’s “crown jewels”, or the database that underpins the key activities of the organisation.

IP Australia is in the process of its largest software development in 20 years using Agile development practices. This project draws together data from four separate lines of business into a single harmonised database accessed by a single new back-end application. In support of this, IPA has had to evolve a successful approach to new data development that integrates with the Agile software development method.

This presentation will discuss the lessons learned in the agile data development process – the good and the bad – ranging from the initial data modelling through to the deployment of physical database structures.

By the end of the presentation you’ll understand some of the key challenges when designing and developing new databases to support agile system development and will be able to avoid some of the major pitfalls.

Andy Peyton 

IP Australia

Andy Peyton is a Senior Solutions Architect for IP Australia. IP Australia is responsible for the issue and management of Patents, Trade Marks, Designs, and Plant Breeder Rights within Australia. Andy has worked for many years in various data management roles for different government departments and is currently leading the team in the design and development of the new database environment that will underpin IPA’s systems for the next 20 years.

Andy has previously worked in organisations such as Centrelink, Defence, Health & Ageing, Defence Housing Authority, Immigration, and the ATO. As a result, he has a keen understanding of the need for designing databases that meet the long-term needs of government departments where “applications come and go, but the data goes on forever”.

Andy has a Bachelor of Science degree from the University of Sydney and a Master of Management Economics from the University of NSW. He is a senior member of the Australian Computer Society and a member of DAMA Canberra.

John Giles

Monday, 5 March
2.15 pm, Room 2

Introduction to Data Vault: A data practitioner’s view

This session is intended to be a primer for those who haven’t yet encountered the wonderful world of Data Vault. Topics will include:

  • The “sales pitch” for Data Vault
      • The why, where, when of using Data Vault
      • Positioning Data Vault with regard to Inmon and Kimball data warehouses
  • The Data Vault building blocks:
      • Hubs
      • Links
      • Satellites
  • There’s more than Raw Data Vault
      • Business Data Vault, and why the distinction is important
      • Point-In-Time tables
      • Bridge tables
      • Virtual data marts
  • Operational Data Vault: a variation on the central theme
  • Common data modeling challenges, including:
      • Transactions
      • Reference tables
      • Duplicates and Same-As-Links
      • Hierarchical links
      • The Flip-Flop effect
  • Data Vault 2.0
  • Where to next?
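For readers meeting these terms for the first time, the hub/link/satellite idea can be caricatured in a few lines of Python (a toy in-memory sketch invented for this summary; real Data Vault implementations use hashed keys, record sources and database tables):

```python
from datetime import date

# Toy Data Vault structures: a hub per business key, a link per
# relationship, and satellites holding the descriptive (historised)
# attributes.  All names and values here are invented.

customer_hub = [
    # (hub key, business key, load date)
    ("C1", "CUST-001", date(2018, 3, 5)),
]

order_hub = [
    ("O1", "ORD-900", date(2018, 3, 5)),
]

customer_order_link = [
    # (link key, customer hub key, order hub key, load date)
    ("L1", "C1", "O1", date(2018, 3, 5)),
]

customer_satellite = [
    # (hub key, load date, attributes) -- a new row per change, never an update
    ("C1", date(2018, 3, 5), {"name": "Jan Smith", "city": "Melbourne"}),
    ("C1", date(2018, 3, 6), {"name": "Jan Smith", "city": "Sydney"}),
]

def current_attributes(sat, hub_key):
    """Latest satellite row for a hub key -- a simple point-in-time lookup."""
    rows = [r for r in sat if r[0] == hub_key]
    return max(rows, key=lambda r: r[1])[2]
```

Note how the satellite appends rather than updates: that append-only history is what Point-In-Time and Bridge tables are later built over.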

John Giles  

Country Endeavours

John Giles is an independent consultant, with a passion for seeing ideas taken to fruition. For 3 decades his focus has been on data modeling and architecture, with a more recent interest in Data Vault modeling. He has worked in IT since the late 1960s, across many industries. He is a Fellow of the Australian Computer Society, and completed a Master’s degree at RMIT University, with a minor thesis comparing computational rules implementations using traditional and object-oriented platforms.

He is the author of “The Nimble Elephant: Agile Delivery of Data Models Using a Pattern-based Approach”.

Ivan Schotsmans

Monday, 5 March
2.15 pm, Room 3

Which came first, the data model or data governance?

Starting an enterprise data model without data governance is a hopeless task.

Most companies understand the need for data governance but are often reluctant to install an additional layer of bureaucracy to achieve it. At a minimum, standards should cover naming conventions and the formatting of logical and physical data model diagrams. By the same token, you also need standardized structures for abbreviations, acronyms and documentation requirements. This will make work easier and more efficient for existing employees, and it will dramatically lower the ramp-up effort for new employees and contractors.

Standards are a reflection of the collaboration between business and IT. Data governance activities should be led and executed by business people, and the end results used by data modelers, architects, and business analysts alike. How can you build a data model when you don’t understand the meaning of critical business data? Data governance is a perfect example of how business and IT strive for mutual goals: creating standards and guidelines to support the enterprise.

In this presentation, we will discuss how to align business and IT skills to better serve the business strategy.

Ivan Schotsmans

BI-Community.org

Ivan Schotsmans is principal and founder of BI-Community.org. He has more than twenty-five years of information management experience in various industries.

Throughout his career Ivan has focused on providing straightforward solutions to business and technical problems for international companies, concentrating on data warehousing, business intelligence and information quality. He is recognized as a subject matter expert in data modeling, information quality and agile business intelligence.

Ivan is also (co-)founder and active member of several global organizations (TDWI Benelux Chapter, DAMA, IAIDQ, among others) and for two years he acted as Global Director for IAIDQ. Ivan frequently speaks at information management industry conferences and teaches on graphical facilitation, data warehousing, data modeling and new information management trends.

Graham Witt

Monday, 5 March
3.45 pm, Room 1

Dynamic Data Modeling

Most data models are static, in that they represent the properties of, and relationships between, business entities at a point in time. However, for a system to properly function over time, its data model must be designed to support data update in response to changes in the real world. Dynamic Data Modelling covers not only static data structures but also update policies, by considering issues such as:

  • What real-world changes must be captured in the database?
  • What are the requirements for preserving a record of the historic state of the attributes and relationships of any entity?
  • Why must changes in attributes and changes in relationships be dealt with differently?
  • Do we also need to record changes in our state of knowledge of the real world?
  • What aspects of the time dimension need to be taken into account?

This presentation provides an overview of the Dynamic Data Modelling toolkit, with which experienced data modellers can effectively support projects delivering BI or operational data resources with a significant time-variant component.
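One of the questions above – preserving the historic state of an attribute – can be sketched minimally in Python (an invented illustration, not part of the toolkit itself):

```python
from datetime import date

# Keep every historic state of an attribute instead of updating in place:
# each row carries a validity interval, and an update closes the current
# row and opens a new one.

history = [
    # (value, valid_from, valid_to) -- valid_to of None means "current"
    ("Room 1", date(2018, 3, 1), None),
]

def update(history, new_value, as_of):
    """Close the current row and append the new state."""
    value, valid_from, _ = history[-1]
    history[-1] = (value, valid_from, as_of)
    history.append((new_value, as_of, None))

def value_at(history, when):
    """What was the value on a given date?"""
    for value, valid_from, valid_to in history:
        if valid_from <= when and (valid_to is None or when < valid_to):
            return value
    return None

update(history, "Room 3", date(2018, 3, 5))
```

Changes in relationships, and changes in our *knowledge* of the real world (bitemporal history), need further machinery beyond this single-interval pattern.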

Graham Witt  

Ajilon

Graham has over 30 years of experience in delivering effective data solutions to the government, transportation, finance and utility sectors. He has specialist expertise in business requirements, architectures, information management, user interface design, data modeling, database design, data quality and business rules. He has spoken at conferences in Australia, the US and UK and delivered data modeling and business rules training in Australia, Canada and the US. He has written two textbooks published by Morgan Kaufmann: “Data Modeling Essentials” (with Graeme Simsion) and “Writing Effective Business Rules”, and has written two series of articles for the Business Rule Community (www.brcommunity.com).

Lloyd Robinson

Monday, 5 March
3.45 pm, Room 2

Putting your enterprise data model to work

Many organisations have enterprise data or information models. They are used for wall posters or presentations and could be updated periodically by consultants, but otherwise are often inert.

We might ask why this is so. Does your enterprise data model improve your customer experience, increase access to information, or improve system development productivity?

In this 40-minute discussion, Lloyd will:

  • Generate an enterprise data model from primary artefacts.
  • Use the data model to refine the scope of a major programme initiative.
  • Use the data model to review and align requirements for a project.
  • Use the data model to generate a user experience.

Lloyd will work with us at speed to outline the possible Swiss army pocket-knife functions of an ordinary data model, using a case study based on a true story – simplified only to meet time constraints. By the end of this talk, participants will understand practical ways to generate and apply an enterprise data model for business value, and where to look for business case benefits.

Lloyd is a principal of Robinson Ryan, a contributor to great successes and participant in extraordinary failures. At any time, he will directly be contributing to investments collectively worth over $500M. His expertise is in data strategy and architecture. He brings order to chaos, calm to turmoil and sensibility to human activity.

Lloyd Robinson, CDMP  

Robinson Ryan

Lloyd has over 20 years of practical experience across strategic, architecture consulting and line management roles, up to direct responsibility for a £50 million budget. He has brought practical improvements to business and project execution across financial services, utilities, education and government.

Critical to his approach is not only his ability to “shape” a solution, but also the team and organisation to deliver it. He maintains a focus on the business benefits and demonstrates care and patience in achieving a desired change. He has worked across four continents including multilingual situations. He is trained in Benefits Management, ITIL, Enterprise Architecture, Program Management, Training and Counselling.

Sue Geuens

Monday, 5 March
3.45 pm, Room 3

DAMA Certified Data Management Professional Examination Primer

Find out about DAMA International’s Certified Data Management Professional (CDMP) designation from one of the most qualified people to talk about it – the DAMA International President. Sue will deliver a primer session for those sitting the examinations during the conference. She will cover the various pathways to become certified and what to do to retain your certification. The session prepares you for the examination by reviewing the scope of the questions that may be asked from the DAMA Guide to the Data Management Body of Knowledge (1st Edition) and practising some exam questions to build your confidence.

Sue Geuens 

President,  DAMA International

Sue Geuens started in data management in 1996, when she was handed a disk with a list of builders on it and told they were hers to manage. Sue mentions this as fate taking over and providing her with what she was “meant to do”. Various data roles later, her clients numbered three of the top four banking institutions in SA, a number of telcos, and various pension funds, insurance companies and health organisations. Sue was the initial designer of data quality matching algorithms for an SA-built data quality and matching tool (Plasma Mind).

This experience stood her in good stead as she slowly but surely climbed the ladder in Southern Africa to become the first CDMP in the country. Sue worked tirelessly on starting up DAMA SA, holding the inaugural meeting in February 2009 as Chapter President, and held the position until the end of 2015. As the current President of DAMA International, she has been active in supporting the recognition of data management and the range of expertise required by an information professional.

Dave Fauth

Tuesday, 6 March
8.30 am, Room 1

Introduction to Graph Databases and Graph Data Modeling with Neo4j

How do I start with Neo4j? What is Cypher? Are graph databases only suitable for graph-based domains like social networks? Or can I use it reasonably in enterprise projects? How do I transform my domain into a graph model?

This tutorial will answer these questions with a mixture of theory and hands-on practice sessions. Attendees will quickly learn how easy it is to develop a Neo4j-backed application.

Skills taught:

  • An understanding of graph databases
  • How to use graph databases
  • Introduction to data modeling with Graph databases
  • How to apply the property graph to common modeling problems
  • Common graph structures for modeling complex, connected scenarios
  • How to get started working with Neo4j.
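As a taste of the property-graph model the tutorial introduces, here is a dictionary-based sketch in plain Python (invented for illustration; Neo4j itself is queried with Cypher, e.g. `CREATE (a:Person {name:'Ann'})-[:KNOWS]->(b:Person {name:'Bob'})`):

```python
# A property graph: nodes and relationships, each with a label/type and
# arbitrary key-value properties.  (Toy in-memory model for illustration
# only -- names and data are invented.)

nodes = {
    1: {"label": "Person", "props": {"name": "Ann"}},
    2: {"label": "Person", "props": {"name": "Bob"}},
    3: {"label": "Talk",   "props": {"title": "Intro to Graph Databases"}},
}

relationships = [
    # (start node, type, end node, properties)
    (1, "KNOWS", 2, {"since": 2015}),
    (1, "ATTENDED", 3, {}),
    (2, "ATTENDED", 3, {}),
]

def neighbours(node_id, rel_type):
    """Traverse outgoing relationships of one type -- the core graph operation."""
    return [end for start, rtype, end, _ in relationships
            if start == node_id and rtype == rel_type]

# Who does Ann know?
ann_knows = [nodes[n]["props"]["name"] for n in neighbours(1, "KNOWS")]
```

The point of the model is that relationships are first-class data, so traversals like `neighbours` replace the join logic of a relational schema.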

Dave Fauth

Neo4j

Dave joined Neo4j 2 1/2 years ago, working as a Field Engineer on the East Coast. He has been working with graph databases and Neo4j for over 4 years. He has been a speaker at the GraphConnect conferences and various meetups. Prior to joining Neo Technology, Dave worked in the Intelligence Community providing data strategy and data exploitation support.

Eddie Sayer

Tuesday, 6 March
8.30 am, Room 2

Next Generation Analytic Architecture – A Practical Approach

The data and analytics landscape is evolving at an astonishing pace. The number of ‘big data’ technology alternatives is staggering. The momentum of open-source Hadoop is undeniable. The marketplace offers the promise of next generation analytic architectures to increase analytic agility, optimize TCO and drive greater business value.

Success requires that your role evolve from conventional ‘data architect’ to ‘data ecosystem architect’. This workshop provides a structure to help you make the transition. We will explore practical steps for modernizing your analytic architecture by leveraging emerging technologies and employing data architecture patterns for acquisition, integration and access. Best practices will be discussed for architecting the next generation analytic ecosystem by implementing data lakes and data products, as well as integrating heterogeneous data components, such as relational databases, NoSQL databases and open-source Hadoop. You will leave the session not only better educated on analytic architecture key themes and concepts, but armed with a practical approach for modernizing the analytic architecture in your own organization.

Eddie Sayer

Teradata

For over two decades, Eddie has been helping large organizations gain a sustainable competitive advantage with data. He has worked in both leadership and individual contributor roles, specializing in enterprise data management programs and information architecture. Eddie joined Teradata in 2008 and has since conducted numerous engagements with some of the most recognized companies in the world, including Wells Fargo, Verizon, T-Mobile and Coca-Cola, helping to set the direction for data management.

Prior to joining Teradata, Eddie was an Information Architect at CheckFree Corporation, the largest provider of e-billing and electronic bill payment in the US. Previously, Eddie held similar positions at Macys Systems & Technology and Inacom Corporation. Eddie is a founding member of the Georgia chapter of Data Management International (DAMA) and is a frequent speaker at industry events.

John Giles

Tuesday, 6 March
8.30 am, Room 3

DIY Corporate Data Model: Develop your own corporate data model framework in 3 hours, using patterns

In this fast-changing world, two demands are being made of us. Firstly, the business wants fast (read “Agile”) delivery time frames. Secondly, it wants today’s solutions to be well-architected so they don’t get thrown away tomorrow. There might appear to be a conflict in demanding both fast and good, but in this world of executive accountability, you’d better deliver.

In many situations, the foundation for IT delivery is a robust and extensible corporate data model (CDM). And I mean one that can actually deliver business value, not one that costs a fortune but only sits on the shelf to be worshipped by passing data modelers.

Can a useful CDM be delivered in weeks, or even days or hours? By employing the proven data model patterns of David Hay, Len Silverston, and others, the resounding answer is, “Yes.” I’ve done it again and again, generating sufficient detail to remove roadblocks. This session will not only prove it is possible, but some of you will walk away with your own CDM foundation!

This session is going to be hands-on. Volunteers will be invited to offer their company as a real-life workshop exercise, and then the participants will collectively nominate which ones to tackle. The lucky ones chosen will end up with a CDM framework plus some supplementary material to facilitate subsequently growing their foundation into a viable, light-weight (yet robust) CDM. The goal for the rest of the participants will be to walk away with the skills and confidence, and the same supplementary material, to be applied within their home base.

For any number of reasons, not all participants may be able to present their organization as candidates for team-based CDM development. However, for those who can, participants are asked to bring a list of their major data-related “pain points” to share. The reason is simple – any CDM data initiative should focus on where we can deliver tangible value.

John Giles  

Country Endeavours

John Giles is an independent consultant, with a passion for seeing ideas taken to fruition. For 3 decades his focus has been on data modeling and architecture, with a more recent interest in Data Vault modeling. He has worked in IT since the late 1960s, across many industries. He is a Fellow of the Australian Computer Society, and completed a Master’s degree at RMIT University, with a minor thesis comparing computational rules implementations using traditional and object-oriented platforms.

He is the author of “The Nimble Elephant: Agile Delivery of Data Models Using a Pattern-based Approach”.

KEYNOTE SPEAKER

Dave Wells

Tuesday, 6 March
12.00 noon, Room 1

Know Your Data – The Keys to Unlocking the Value of Data

Data delivers value only when it is used to create or accelerate positive results without assuming undue risks. The spectrum of data uses, ranging from business audit trail to advanced analytics, is rich with value opportunities. But getting from opportunity to impact depends on the depth of data knowledge. BI and analytics are especially risky when undertaken without fully understanding the data. The variety and abundance of big data raise the bar for those responsible for capturing and communicating data knowledge. Today, four essential skills are at the core of unlocking data value:

  • Data Modeling
  • Data Profiling
  • Data Curating
  • Data Cataloging

Dave Wells  

Eckerson Group

Dave Wells leads the Data Management Practice at Eckerson Group, a business intelligence and analytics research and consulting organization. Dave works at the intersection of information management and business management, where the real value is derived from data assets. He is an industry analyst, consultant, and educator dedicated to building meaningful and enduring connections throughout the path from data to business value. Knowledge sharing and skills development are Dave’s passions, carried out through consulting, speaking, teaching, and writing.

He is a continuous learner – fascinated with understanding how we think – and a student and practitioner of systems thinking, critical thinking, design thinking, divergent thinking, and innovation.

David Wiebe

Tuesday, 6 March
2.15 pm, Room 1

Conceptual Data Modeling Drives Consistent Data Delivery

The value of the conceptual data model is extracted every time it is used. There is no shortage of ways to leverage this fundamental data asset.

The amount of business understanding captured within the conceptual model is immense. It holds definitions of business concepts and individual data items. It records rules on how data concepts relate to each other, and documents which scenarios bring data concepts together. It keeps track of the data subjects that are regularly referenced by the business, how these data subjects are reusable components that serve the business architecture, and where they manifest themselves across the application architecture.

In software delivery, it contributes to consistent data management across all sorts of information systems.

This one-hour session will begin with a conceptual data model and use it to design the schemas for an online transaction processing database, a piece of an enterprise data warehouse, a data mart, and a message payload for a service object.

See how investing in data modelling returns value in information delivery.

David Wiebe

Robinson Ryan

David Wiebe is a CDMP with over 25 years of business experience covering consulting, systems development and integration. He practices a solution architecture approach for designing and delivering information systems that use services and data architectures to implement business functionality while aligning to an enterprise architecture framework.

He is experienced in data modelling, business process management, database management and software development. He has worked in projects where strategic enterprise designs were implemented for large-scale application development efforts. He is a practitioner of the Zachman Enterprise Architecture Framework, a contributor to Standards Australia 2006 AS 4590 – Interchange of Client Information, a co-author of the Queensland Government Information Portfolio Framework and has worked with IBM’s IFW standard for financial services. He is an expert erwin user.

David has created and evolved enterprise data and information models for the Commonwealth Bank of Australia, AMP, ANZ Bank, BT Financial Group, the Australian Tax Office, the Department of the Environment and Energy, the Australian Securities and Investments Commission and Queensland Government. This included developing information and data standards and policies, roadmaps and guidelines for aligning the information architecture with the enterprise architecture keeping the relevant artefacts up to date.

Steve Hoberman

Tuesday, 6 March
2.15 pm, Room 2

How to Grade a Data Model

I have been using the Data Model Scorecard® to validate data models for over 15 years. Over the past year, I have built a free online tool that will help you assess and score your own models. This tool takes the form of a decision tree, where over 150 questions are asked to “score” a model from Poor to Excellent. This session covers the ten Scorecard categories, the decision tree required to review a model (and which is embedded in the tool), and the “Top 5” questions that can make or break a model. You will then grade a data model using the online tool.
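As a generic illustration of question-based scoring (the categories, weights and grade bands below are invented, not the Data Model Scorecard® itself):

```python
# Toy model-scoring sketch: tally yes/no answers per category and map
# the overall percentage to a grade band.  All categories, questions
# and thresholds here are made up for illustration.

answers = {
    "Correctness":  [True, True, False, True],
    "Completeness": [True, True, True, True],
    "Readability":  [True, False, False, True],
}

def score(answers):
    """Percentage of questions answered 'yes' across all categories."""
    total = sum(len(qs) for qs in answers.values())
    passed = sum(sum(qs) for qs in answers.values())
    return 100 * passed / total

def grade(pct):
    """Map a percentage to a grade band."""
    for threshold, label in [(90, "Excellent"), (75, "Good"),
                             (50, "Fair"), (0, "Poor")]:
        if pct >= threshold:
            return label

pct = score(answers)   # 9 of 12 answers pass -> 75.0
```

The real tool structures its 150+ questions as a decision tree, so later questions depend on earlier answers rather than being a flat checklist like this.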

Steve Hoberman 

Steve Hoberman & Associates, LLC

Steve Hoberman has trained more than 10,000 people in data modeling since 1992. Steve is known for his entertaining and interactive teaching style (watch out for flying candy!), and organizations around the globe have brought Steve in to teach his Data Modeling Master Class, which is recognized as the most comprehensive data modeling course in the industry.

Steve is the author of nine books on data modeling, including the bestseller Data Modeling Made Simple. One of Steve’s frequent data modeling consulting assignments is to review data models using his Data Model Scorecard® technique. He is the founder of the Design Challenges group and recipient of the 2012 Data Administration Management Association (DAMA) International Professional Achievement Award.

Clifford Heath

Tuesday, 6 March
2.15 pm, Room 3

Managing Models and Meaning

To manage something, we have to understand it. But what do we mean when we say we ‘understand’? Is it possible that two people both understand a thing, but do not understand each other’s point of view? How certain can we be that we have a shared understanding of a given problem, so we can manage it together? We review historical theories of mind and the nature of knowledge, then approach these questions using Modelling. The details of a model require data, but before we can make decisions about data, we must first agree on words and phrases. In the process of reaching that agreement, we use a metaphor, and we make several kinds of model.

Clifford Heath

Infinuendo

Clifford Heath is a software innovator, toolmaker, product architect and designer. He is a Masters-level CDMP, a Fact-Based Modeling expert participating in the Working Group for standardisation, and the inventor of the Constellation Query Language.

Clifford has applied Fact-Based Modeling to a wide range of systems development and data integration environments that have all contributed to the breadth of his approach. Clifford has presented and published at international scientific and data conferences, workshops and events.


Tuesday, 6 March
3.45 pm, Room 1

Business-friendly Data Models

If a system is to support an enterprise’s business information requirements, a necessary part of the design process is an effective review of the design by appropriate business stakeholders. For such review to be effective, the design documentation provided to those stakeholders (“the business model”) must:

  • be understandable
  • be complete, i.e. depict all information in which business stakeholders are interested
  • not depict any information in which business stakeholders have no interest (“noise”), which distracts or confuses reviewers and reduces review effectiveness.

Logical data models do not meet these criteria, yet most conceptual data models are merely degenerate logical data models containing much noise. This presentation details what should be included in, and excluded from, a data model that is to be reviewed by business stakeholders.

Graham Witt  

Ajilon

Graham has over 30 years of experience in delivering effective data solutions to the government, transportation, finance and utility sectors. He has specialist expertise in business requirements, architectures, information management, user interface design, data modeling, database design, data quality and business rules. He has spoken at conferences in Australia, the US and UK and delivered data modeling and business rules training in Australia, Canada and the US. He has written two textbooks published by Morgan Kaufmann: “Data Modeling Essentials” (with Graeme Simsion) and “Writing Effective Business Rules”, and has written two series of articles for the Business Rule Community (www.brcommunity.com).


Tuesday, 6 March
3.45 pm, Room 2

Grooming Data Modelers

Very often, people fall into the data world almost by accident. But once in, how on earth do we as “newbie” Data Management professionals move forward? And what do our employers need to do to make sure that we actually fit into the new world we are navigating?

Sue has spent more than 20 years now in Data Management, is a self-confessed “dataholic”, totally loves what she does and is always more than willing to share her knowledge and expertise with anyone who asks.

In this session you will learn:

  • What are the requirements for being a “good” data modeller? What would make you a “bad” data modeller?
  • How can we change what we are doing and get on the right track?
  • How do we leverage what other DM professionals are doing?
  • What are the skills and expertise that we feel we need and how are we going to acquire them if we don’t already have them?

This workshop spun off from DMZ US 2017, where many questions were asked about the people in data modelling and what they really need to be and do to make that difference, and to bring data modelling to the forefront, where it is acknowledged as not just nice to have but imperative. Join Sue and find out about the people side. There will be no data modelling done, but you will walk away with a new appreciation of exactly how important you and your skills are in this new datacentric world we are living in.

Sue Geuens 

President, DAMA International

Sue Geuens started in Data Management during 1996, when she was handed a disk with a list of builders on it and told they were hers to manage. Sue mentions this as fate taking over and providing her with what she was “meant to do”. Various data roles later, her clients numbered 3 of the top 4 banking institutions in SA, a number of telcos, and various pension funds, insurance companies and health organisations. Sue was the initial designer of data quality matching algorithms for an SA-built Data Quality and Matching tool (Plasma Mind).

This experience stood her in good stead as she slowly but surely climbed the ladder in Southern Africa to become the first CDMP in the country. Sue worked tirelessly on starting up DAMA SA, holding the Inaugural meeting in February 2009 as Chapter President, a position she held until the end of 2015. As the current President of DAMA International, she has been active in supporting the recognition of data management and the range of expertise required by an information professional.


Tuesday, 6 March
3.45 pm, Room 3

Advanced Data Vault Tips and Tricks

Data Vault patterns seem almost deceptively easy when you first look at them, but there is more to them than first meets the eye. This session looks beyond the initial Hub, Link and Satellite approaches and explains what other considerations there are to be aware of, to help define a truly resilient and flexible Data Warehouse solution.

The Data Vault methodology provides some elegant handles to develop your Data Warehouse – the various required Data Warehouse mechanics are organised in a way that allows for a flexible solution. But you are still delivering an Enterprise Data Warehouse, and the associated complexities will need to be addressed somewhere.

When all these necessary considerations have been incorporated, the patterns start to look a fair bit more complicated. The reasoning for this, and the modelling and implementation choices you have to make along the way – as well as their consequences – will be discussed in this hour of advanced Data Vault tips & tricks.

Contents:

  • Data Vault is about supporting the business in defining its requirements over time. But is it realistic to get the data model itself right in one attempt? How enhanced templates help you refactor.
  • Things you should know about the Hub, Link and Satellite patterns beyond the basics and their position in the overall design.
  • Loading multiple changes in one go using record condensing and change merging for Satellites. When is a change a change?
  • Getting data out again – ‘time flattening’ in PIT and Dimension tables.
  • The role of the ETL control framework and Referential Integrity.
  • Lessons learned – my top 5 favourite mistakes over more than a decade of Data Vault implementations.

Roelant Vos

Allianz Worldwide Partners

Roelant Vos has been active in Data Warehousing and Business Intelligence for almost 20 years and works for Allianz Worldwide Partners as the General Manager for Business & Customer Insights in Brisbane, Australia. In a role that is highly focused on analytics, Roelant works on collecting, integrating, improving and interpreting data to support various business improvement initiatives.

Passionate about improving the quality and speed of delivery through model-driven design and development automation, he has been at the forefront of contemporary modeling and development techniques for many years. Whenever there is some time, he publishes updates on these topics at www.roelantvos.com/blog
