Data Lake Engineer (m/f)

  • Raiffeisen Bank International AG is looking for a Data Lake Engineer (m/f) in Vienna.


    What you can expect:

    • Work with data and analytics experts to improve the functionality of our data systems.
    • Collaborate across the enterprise to share best practices and provide reusable, scalable tools and code for our analyst community.
    • Design, build and test the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies (DevOps & Continuous Integration).
    • Build data integrations from various sources and technologies into the data lake infrastructure as part of an agile delivery team.
    • Drive the advancement of RBI data infrastructure by designing and implementing the underlying logic and structure for how data is set up, cleansed, and ultimately stored for organizational usage.
    • Assemble large, complex data sets that meet functional / non-functional business requirements.
    • Identify, design, and implement internal process improvements: automating manual processes, optimizing data deliveries, re-designing infrastructure for greater scalability.
    • Build and select analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
    • Monitor the platform’s capabilities and react to unplanned interruptions, ensuring that environments are provisioned and loaded on time.
    • Manage incidents reported by data suppliers or data consumers, and provide service reporting.
    • Take end-to-end responsibility for tasks, from assignment to completion.

    What you bring to the table:

    • A suitable technical education (technical school or a relevant university degree).
    • About 3 years of experience implementing integration components and applications, particularly in big data projects or systems integration.
    • 2 or more years of Hadoop experience (Hortonworks and Cloudera preferred, AWS Cloud an advantage) building data ingestion and transformation pipelines, including knowledge of database components such as ETL tools, relational database management systems and BI tools.
    • Hands-on experience in data discovery, data blending and data cleansing for analytical purposes across various data sources (e.g. internal data warehouses, weblogs, social media, data market providers).
    • Experience supporting and working with cross-functional teams in a dynamic environment.
    • Strong analytic skills related to working with unstructured datasets.
    • Solid experience with CD/DevOps methodology and a good overview of the related tools and tool chains.
    • Demonstrated ability to build processes that support data transformation, data structures, metadata, dependency and workload management (experience with mining tools, analytic applications and metadata management is an advantage).
    • Practical experience in managing, monitoring and supporting an application, including job scheduling (e.g. UC4).
    • A combination of technical ability, business acumen and an excellent understanding of industry standards and technology trends.
    • Fluent English; German is appreciated but not mandatory.

    What we offer:

    • You’ll work in an international team at a leading bank
    • You’ll benefit from flexible working arrangements and determine your own work-life balance
    • You’ll benefit from the very latest in tailored professional development
    • You’ll earn an appropriate salary starting at EUR 46,500 gross p.a., excluding overtime
