Data Tech
Data Analyst
Being able to give meaning to data, in order to extract business or usage value from it, is the very essence of the role of Data Analysts at iPepper. In other words, a Data Analyst is a kind of Sherlock Holmes of the future, solving puzzles with data: instead of following leads to find a criminal, they analyse and present data in a certain way to uncover hidden information. In essence, they help businesses make better-informed decisions by providing valuable insights.
For example, at a client start-up offering a personalisation solution for the resale of travel products to complement an airline ticket, one of the iPepper Data Analyst's first tasks was to identify the business potential of such a solution.
To do this, he applied statistical analysis and data visualisation techniques to data from various travel booking platforms. This made it possible to extract and aggregate useful, relevant information and to reconstruct travellers' end-to-end itineraries. On this basis, travel could be segmented by type and the right moments identified for promoting travel products, such as a transfer from the hotel to a meeting or to the airport. Later, our iPepper Data Analyst used the information gathered to pinpoint the factors that improve conversion rates for the resale of travel products, as well as the trends and anomalies that affect business performance. In short, this example illustrates the key role an iPepper Data Analyst can play.
In terms of technology, our Data Analysts work with analysis and visualisation tools such as Tableau and Power BI. SAS, a statistical analysis and data visualisation suite, is also used, mainly by major accounts. Certain packages from the R and Python ecosystems, such as Python's pandas library, can be part of the toolbox required to facilitate the manipulation and analysis of data in applications built on these languages. Finally, mastery of good old Excel and SQL is still a must for successful Data Analyst assignments.
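Since SQL keeps coming up, here is a minimal, self-contained sketch of the kind of aggregation query a Data Analyst runs every day, using Python's built-in sqlite3 module. The table and figures are invented purely for illustration:

```python
import sqlite3

# Toy bookings table: the data is invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (segment TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO bookings VALUES (?, ?)",
    [("business", 420.0), ("leisure", 180.0),
     ("business", 560.0), ("leisure", 90.0)],
)

# Aggregate revenue per travel segment -- the bread and butter of analysis.
rows = conn.execute(
    "SELECT segment, COUNT(*) AS trips, SUM(revenue) AS total "
    "FROM bookings GROUP BY segment ORDER BY total DESC"
).fetchall()

for segment, trips, total in rows:
    print(segment, trips, total)
```

The same `GROUP BY` pattern transfers directly to MySQL or PostgreSQL; only the connection line changes.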
Data Scientist
Often a mathematical scientist at heart, the Data Scientist at iPepper helps predict the future using data! Imagine discovering the next bestseller using data on people's reading habits. Wow, isn't that amazing? In any case, this job is for people who like playing with numbers and want to solve complex problems that can make a difference to the world we live in. Do you like experimenting and using mathematical methods, statistical techniques and Artificial Intelligence tools? Would you like to implement intelligent predictive models with machine learning to optimise business and operational processes? Then the Data Scientists club is for you!
In fact, that's exactly what one of our Data Scientists is doing at a "data advertising-tech" start-up specialising in the creation of audiences for personalised advertising campaigns, while complying with new privacy regulations.
While the design and experimentation of AI models and, more generally, statistical techniques are essential, the ability to collect, clean, analyse, interpret and semantise data may also be required, especially when the company has neither a Data Analyst nor a Data Engineer. As part of our Data Science assignments, you'll be working in Python, for example, to analyse data and implement machine learning models. R and Go are also required by some clients. SQL remains a 'must' when it comes to accessing data stored in SQL databases such as PostgreSQL. However, the ability to work with NoSQL-type databases is also required. TensorFlow, Google's open-source library, is very popular with certain customers with deep learning and image recognition projects. The use of other open-source machine learning libraries such as PyTorch and Scikit-learn, to name but a few, is also 'on the menu' for some assignments. In short, you will find much the same range of tools as an AI engineer uses, with the difference that you are in charge of designing AI prediction models.
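To make the predictive-modelling mindset concrete, here is a toy illustration (nothing here comes from a real assignment, and the data points are invented): a one-variable least-squares fit written in a few lines of plain Python, the same idea that libraries like Scikit-learn industrialise:

```python
# Toy supervised learning: fit y = a*x + b by ordinary least squares.
# The data points are invented for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form OLS estimates for slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x: float) -> float:
    """Predict y for a new, unseen x with the fitted model."""
    return slope * x + intercept

print(round(slope, 2), round(intercept, 2))
```

Fitting on known data and then calling `predict` on unseen values is, in miniature, the train-then-infer loop behind every machine learning model.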
AI Engineer
The Artificial Intelligence Engineer is the Jedi of computing! He uses the power of the force (aka machine learning algorithms) to make machines more intelligent and better at understanding human beings.
So, as an AI engineer at iPepper, you'll master machine learning techniques (aka supervised / unsupervised machine learning, deep learning, reinforcement learning...) and natural language processing! The aim is to develop computer systems with the ability to reason, learn and adapt.
For example, for one of our start-up customers working in the field of urban mobility, our AI engineers are helping to design and develop an AI application that can be used to implement an optimal corrective action plan in the event of traffic disruption and technical incidents. This is achieved by implementing deep learning based on the history of different types of disruption.
As a result, the diversity of AI applications you can be brought in to develop is now almost infinite... Whether it's systems for speech recognition, vision or natural language understanding, AI has become the cornerstone for many of our customers. As a result, the applications that our customers submit to us are varied and cover all industries - whether they be voice-recognition virtual personal assistants, chatbots for customer service, facial-recognition security systems, self-driving mobility solutions, or solutions capable of interpreting complex legal documents.
Examples of deep learning frameworks that you might come across include TensorFlow, PyTorch, Caffe and Keras. As for libraries for mathematical operations, data processing and visualisations, NumPy, Scikit-learn, Matplotlib and Seaborn are some of the most commonly used tools. In short, there's a whole panoply of tools to be used, but you need to respect general ethical principles if you don't want to fall on the dark side of the force!
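To give a feel for the learning loop these frameworks automate at scale, here is a classroom-scale sketch (entirely invented for illustration, and not how TensorFlow or PyTorch work internally): a single perceptron learning the AND function from labelled examples:

```python
# Toy supervised learning: a single perceptron trained on the AND function.
# This is a classroom-scale illustration; real frameworks use gradient
# descent over millions of parameters, but the learn-from-labelled-examples
# loop is the same idea.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate

def forward(x):
    """Step activation: fires iff the weighted sum crosses the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge weights in proportion to the error.
for _ in range(20):
    for x, target in data:
        error = target - forward(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

predictions = [forward(x) for x, _ in data]
print(predictions)
```

After a handful of passes over the data, the weights settle and the model reproduces the AND truth table on all four inputs.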
Data Engineer
The Data Engineer at iPepper is like a data DJ, mixing data to create music for Data Scientists and Data Analysts to dance to!
Your role, like any engineer, is to design and manufacture. However, rather than aircraft or buildings, you specialise in creating data bridges. Within our missions, your first responsibility is to collect raw data from multiple sources, such as data warehouses, SQL or NoSQL databases, right down to files containing data that is not always well structured.
Then it's up to you to 'clean up' this data in the noble sense of the word, transforming and structuring it. After the data bridges, you work to create data lakes. During these actions, you automate the various stages of data acquisition, from extraction to storage, so as to establish an optimum industrialised approach.
As part of these projects, you will be required to use various tools, such as Apache Spark, Apache Flink and Apache Storm for data processing. In terms of data extraction in particular, mastery of certain ETL (Extract, Transform, Load) frameworks such as Stitch is required. Similarly, orchestrating this data may require tools such as Apache Airflow, AWS Glue or Apache NiFi. The SQL databases most often used are MySQL, PostgreSQL and Oracle; on the NoSQL side, we find Cassandra, MongoDB, Amazon DynamoDB, CouchDB...
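The Extract-Transform-Load pattern itself fits in a few lines. Here is a miniature, purely illustrative pipeline (the CSV payload, field names and cleaning rules are all invented); real assignments delegate each stage to tools like Spark or Stitch, but the three stages are the same:

```python
import csv
import io

# A miniature Extract-Transform-Load pipeline; the CSV payload below is
# invented for illustration.
RAW_CSV = """user_id,country,amount
1,fr,10.5
2,FR,3.0
3,de,
4,de,7.25
"""

def extract(raw):
    """Extract: parse semi-structured source data into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(records):
    """Transform: normalise country codes, drop rows with missing amounts."""
    clean = []
    for r in records:
        if not r["amount"]:
            continue  # cleaning step: discard incomplete rows
        clean.append({"user_id": int(r["user_id"]),
                      "country": r["country"].upper(),
                      "amount": float(r["amount"])})
    return clean

def load(records):
    """Load: aggregate into a per-country store (a stand-in for writing
    to a warehouse or data lake)."""
    totals = {}
    for r in records:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

totals = load(transform(extract(RAW_CSV)))
print(totals)
```

Automating exactly this chain, from extraction to storage, is what the orchestration tools above (Airflow, Glue, NiFi) industrialise.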
In short, the Data Engineer is a master in the art of using a variety of tools to ensure that data is ready to be analysed and exploited by Data Scientists and Data Analysts.
Big Data Engineer
The Big Data Engineer is a "Gargantuan" Data Engineer, operating with a very, very large volume of data!
Analysing, managing and exploiting large quantities of data, while taking into account the problems of anonymisation and data security, is just one example of a mission that one of iPepper's engineers carried out within a start-up specialising in cybersecurity.
In this type of position, you will of course be using classic technologies such as Hadoop or Spark to process large volumes of data and build operable data lakes, but you will also be using analysis tools such as Hive, Pig and Impala to carry out queries and analyses on this data.
The need to deploy the streaming technology required for real-time processing is also becoming a major trend. Indeed, more and more iPepper client companies are interested in the ability to analyse large volumes of data in real time to make faster decisions or offer use cases that are highly contextualised to the needs of their end users/customers. As part of certain assignments, you may be asked to work on streaming-type technologies such as Apache Kafka or Apache Storm.
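Kafka itself needs a running broker, so here is a broker-free sketch (events and window size are invented) of the core streaming idea such jobs implement: aggregating events over a sliding time window as they arrive, rather than in batch:

```python
from collections import deque

# Broker-free sketch of a streaming concept that Kafka/Storm jobs implement:
# aggregate events over a sliding time window as they arrive.
# Events are (timestamp_seconds, value) pairs, invented for illustration.

def sliding_window_sums(events, window=10):
    """Yield, for each event, the sum of values seen in the last `window`
    seconds (inclusive of the current event)."""
    buf = deque()  # holds events still inside the window
    total = 0.0
    for ts, value in events:
        buf.append((ts, value))
        total += value
        # Evict events that have fallen out of the window.
        while buf and buf[0][0] <= ts - window:
            _, old = buf.popleft()
            total -= old
        yield ts, total

stream = [(0, 1.0), (4, 2.0), (9, 3.0), (12, 4.0), (25, 5.0)]
results = list(sliding_window_sums(stream, window=10))
print(results)
```

Because the aggregate is updated per event instead of recomputed over the whole history, the same logic scales to the high-throughput real-time use cases described above.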
Blockchain Engineer
When it comes to protecting and authenticating sensitive information or high-value assets, deploying and exploiting blockchain technology with the help of a Blockchain Engineer is a choice increasingly adopted by some of our customers.
Fintech start-ups were the first, and some of them have democratised access to financial services, a real vector of inclusion in our modern societies. But more recently, we have been seeing increasingly varied demands. For example, the use of blockchain to ensure the traceability and authenticity of fine wines is a solution that one of iPepper's client start-ups has chosen to implement.
Blockchain is also the solution chosen by iPepper to implement an anonymised recruitment process via its Xtreme Profiler platform. As an iPepper blockchain engineer, you will design, develop and maintain private or public blockchain applications. You decentralise and automate business processes by developing "smart contracts". You will also work on integrating blockchain technology with other systems and ensuring that your smart contracts are secure.
You will be working on different platforms such as Ethereum and Hyperledger. But more than just a technical expert, the iPepper blockchain engineer is contributing to the Web 3.0 revolution.
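The integrity guarantee behind these use cases can be sketched in a few lines of Python (the payloads are invented, echoing the fine-wine traceability example): each block commits to the hash of the previous one, so tampering with any earlier record breaks every hash that follows. This is the mechanism only; real chains add consensus, signatures and distribution on top:

```python
import hashlib
import json

# Minimal sketch of the integrity mechanism blockchains rely on.
# The payloads are invented for illustration.

def block_hash(block):
    """Deterministic SHA-256 digest of a block's canonical JSON form."""
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()

def append_block(chain, payload):
    """Link a new block to the hash of the previous one."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "payload": payload})

def verify(chain):
    """Recompute every link; report False as soon as one is broken."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, {"bottle": "Ch. Example 2015", "event": "bottled"})
append_block(chain, {"bottle": "Ch. Example 2015", "event": "sold"})
print(verify(chain))  # the intact chain verifies

chain[0]["payload"]["event"] = "forged"  # tamper with history
print(verify(chain))  # the tampering is detected
```

It is this tamper-evidence that makes the technology a fit for traceability, authenticity and anonymised-process use cases alike.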
Dev App
Dev Backend Engineer
Being a Dev Backend engineer is like being the invisible performer in a show. He hides behind the scenes, but it's thanks to him that everything runs smoothly!
In the case of data-intensive applications such as those we often develop for our iPepper customers, it's the engineer who integrates, orchestrates and presents the work of the data engineers, data scientists and AI engineers to the Front-End Dev engineers.
Software architects at heart, our Back-End engineers have mastered the principles of software engineering, such as Design Patterns, which enable highly scalable applications to be deployed. Our mastery of microservices architectures and intelligent agent orchestration illustrates just how sophisticated our business has become.
At iPepper, we like to say to our Back-End Dev Engineers: "the sky is the limit". In fact, the diversity of assignments and technologies is impressive. Whether it's applications for players in aeronautics, maritime, biotech, finance or the energy transition, our Dev Backend Engineers work to deliver applications that are state-of-the-art, scalable and highly secure. However, over the years iPepper has built up a battalion of back-end engineers who are experts in certain key languages such as Python, Java, C# and C++.
To speed up development, our back-end engineers use back-end or web app frameworks such as Flask, Django, Express, Spring, J2EE, .NET and sometimes even Ruby on Rails. The integration of SQL and NoSQL database management systems such as MySQL, MongoDB, SQL Server and PostgreSQL also holds no secrets for us.
Finally, setting up REST/Json or GraphQL APIs for interaction between the various components of an application or between different applications is part of the 'daily grind'. Ready to join the adventure?
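As a flavour of that daily grind, here is a self-contained sketch of a tiny REST/JSON endpoint using only Python's standard library (the route and payload are invented; in practice a framework like Flask or Django handles the plumbing):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class StatusHandler(BaseHTTPRequestHandler):
    """Serves a single illustrative REST endpoint: GET /api/status."""

    def do_GET(self):
        if self.path == "/api/status":
            body = json.dumps({"service": "demo", "healthy": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Bind to port 0 so the OS picks a free port, then query our own API.
server = ThreadingHTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urlopen(f"http://127.0.0.1:{port}/api/status") as resp:
    payload = json.loads(resp.read())
server.shutdown()
print(payload)
```

The front end, another service or a mobile app would consume exactly this kind of JSON response.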
Dev Frontend Engineer
At iPepper, the Frontend engineer is seen as a bit of an IT wizard! He uses his magic wand (HTML, CSS and JavaScript code) to conjure up wonders on the screen. He takes ideas (designs, functionalities, etc.) and transforms them into an incredible user experience.
They often work closely with the Dev Back-End engineers to ensure good integration between the visual aspects and the functionality of the site or application.
To speed up their work, they use modern JavaScript libraries and frameworks such as React and Angular, with Node.js powering much of the build tooling. What's more, designing a consistent and complementary multi-channel experience is at the heart of their remit; whether on a web interface or a mobile interface, our Front-End engineers have mastered the art of making applications "fully responsive".
To deliver a fluid user experience, our Front End engineers also take care to improve the performance of web pages using techniques such as compression, caching and asynchronous requests.
What's more, thanks to the integration of artificial intelligence, our Front End Dev engineers also have to think about developing applications that are compatible with a multitude of use cases customised 'on the fly' for users.
Finally, as part of our fight for equal opportunities in the workplace, it is important for iPepper that our Front End engineers know how to implement websites and applications that are accessible to people with special needs, such as the visually impaired, the deaf or users with physical limitations.
DevOps
DevOps Engineer
First of all, the purists will tell you that DevOps is not a profession but an approach!
But at iPepper, we agree that a DevOps engineer is a true professional responsible for setting up and maintaining application integration, deployment and delivery systems. Your job? To ensure that the development phase (Continuous Integration) and the operations phase (Continuous Deployment) around applications are carried out as quickly as possible while ensuring the quality of the whole.
For example, one of our iPepper engineers is working for a customer on digital solutions for local authorities and transport operators to orchestrate local mobility! This involves developing the CI/CD toolchain and supporting Devs and Ops in implementing the DevOps culture. It also involves setting up POCs (Proof Of Concept) to improve production chain processes.
At the moment, DevOps people are being asked to master automation technologies. So you'll probably be using Ansible, Puppet or Chef to automate server configuration, and using cloud computing tools such as Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform (GCP) to deploy and manage applications.
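To illustrate the declarative style these automation tools use, here is a hypothetical Ansible playbook sketch; the group name, package choice and tasks are invented for illustration, not taken from a real assignment:

```yaml
# Hypothetical playbook sketch: host group and package are invented.
# It shows the declarative style Ansible uses for server configuration.
- name: Configure application servers
  hosts: app_servers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

You describe the desired end state ("nginx installed and running") rather than the commands to reach it, and Ansible applies it identically across every server in the group.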
Being a Devops engineer also means guaranteeing the scalability and isolation of applications. You will therefore need to know about or be trained in container management tools such as Docker and Kubernetes.
These days, it's important to ensure the performance of applications and systems in production. For this, you typically have Prometheus or Grafana to help you. Finally, with a view to monitoring systems, you're likely to be asked to use Nagios or Zabbix.
Data Ops Engineer
The Data Ops engineer at iPepper is a bit like a DevOps fan of data! Managing the operations and architecture of different types of database management systems, such as SQL and NoSQL, or even Big Data or streaming, form the foundations of their knowledge.
A Data Ops engineer is therefore generally defined as a professional who focuses on implementing and maintaining processes and tools to manage data effectively in an IT environment. So everything to do with data falls within their scope of expertise: management, quality, integration, analysis and visualisation. As a result, it's a job where you typically have to work closely with other data enthusiasts - data scientists, data engineers and data analysts!
We can give you the example of a leader in payments in France and Europe, positioned on retail and fintech offerings. Their Data Ops engineer has been given the task of improving quality and speeding up data analysis and processing cycles, in order to constantly renew their service offering.
If you want to perform well in this job, you'll need to keep up with the trend towards automation. To do this, be prepared to master Jenkins or Ansible to automate continuous deployment and configuration management. You'll probably also be asked to set up data pipelines. To do this, do what one of our iPepper engineers did and use Apache Airflow to plan your workflow.
Otherwise, a growing number of data projects require data to be processed in real time so that decisions can be taken more quickly or business operations can be made more agile. With this in mind, we can recommend that you take a closer look at real-time data streaming technologies such as Apache Kafka. Quite classically, the Kubernetes container orchestrator is very popular. In fact, it's often used in conjunction with Prometheus for monitoring and alerts.
Pay attention to data security: cyber attacks and data breaches are on the increase.
Cloud computing services are used by many iPepper customers, so we really advise you to understand how to manage data in a cloud environment.
And if you want to become a data management and data quality star and have clients clamouring for your skills, take a look at what AI and machine learning can do for you.
System Ops Engineer
In the corridors of iPepper, the System Ops engineer is known as the "Boss of Prod". Capable of working on operating system virtualisation, they also have a good command of how operating systems such as Linux and Windows work. And of course, managing cloud infrastructures is part of their day-to-day job.
Naturally, they are responsible for setting up, maintaining and optimising IT systems in production!
If you're interested in this job, you know that you'll be working closely with the Dev and Ops teams to ensure the smooth flow of applications to production. In this job, you can expect a wide variety of tasks: systems monitoring, performance management, capacity management, backup and deployment management, and problem resolution. No less!
In the aeronautics sector, one of our iPepper engineers is working on a platform for exchanging data between airlines and European governments. In this context, he relied on Prometheus to monitor the system, but he could just as easily have preferred Nagios or Zabbix!
A good systems ops engineer must optimise systems in production by ensuring their availability, reliability and performance. For this last point, we often talk about Prometheus, Grafana, Elastic Stack to collect, store and visualise performance data for applications and systems.
You also need to be prepared to become a security whiz by putting in place measures to protect data and applications from external threats. You shouldn't be allergic to problems, either; you should even anticipate them by setting up backup systems or devising emergency procedures.
You need to have the soul of a benevolent leader, because System Ops engineers are often asked to supervise all the production teams. So good communication and organisation are a must.
In terms of technology, there are trends and tools common to other professions. Automation first of all, with tools like Ansible, Puppet or Chef for server configuration, and cloud computing platforms such as Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform (GCP) for deploying and managing applications. Speaking of the Cloud, many of iPepper's customers are turning to public Cloud services for scalability, flexibility and... cost. So it's best to be comfortable with Cloud services in general!
Have you heard of Infrastructure as Code? If not, get started. In a nutshell, it's the practice of describing software infrastructure in the form of code, making it easy to reproduce, efficiently document and automate. Tools such as Terraform, Ansible and CloudFormation enable infrastructure to be managed in this way. Finally, to run applications efficiently and gain scalability, knowing how to use containerisation leaders like Docker and Kubernetes correctly is a good idea!
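As a taste of Infrastructure as Code, here is a hypothetical Terraform sketch; the region, AMI ID and tags are invented placeholders, not values from a real deployment:

```hcl
# Hypothetical Terraform sketch: region, AMI ID and tags are invented.
# Declaring the server as code makes it reproducible and reviewable.
provider "aws" {
  region = "eu-west-3"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"

  tags = {
    Name = "app-server"
  }
}
```

Because the infrastructure lives in a text file, it can be code-reviewed, versioned in Git and recreated identically in another environment with a single `terraform apply`.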
ML Ops Engineer
If we had to sum up what an ML Ops engineer is, we could tell you that he or she is a Machine Learning expert who has mastered the DevOps approach.
These are highly sought-after skills for any project involving Machine Learning-oriented applications!
You need to deploy and maintain machine learning models in production reliably and efficiently! And to do that, you need to adopt a DevOps approach, which means taking deployment constraints into account right from the model design and training stage.
At iPepper, one of our engineers is working on the platform of a client who is an expert in the exchange of consumer data between companies to offer relevant promotions and services to potential buyers.
On a day-to-day basis, the ML Ops engineer is asked to set up learning pipelines combined with model monitoring tools.
In terms of tools, there's a lot of talk about DataRobot, which will enable you to cover the entire ML lifecycle, i.e. from preparing training data sets to training and deploying models. It's a serious competitor to Google's AutoML, especially as it's a no-code offering! But we could also mention Domino Data Lab, MLflow, or Kubeflow if you're planning to deploy on Kubernetes.
If you want to make your life easier and copy Netflix, you could also take a closer look at Metaflow, a Python framework that facilitates the execution of machine learning projects from the prototype stage right through to production. You can score points by knowing how to use feature stores! These are building blocks that allow you to store, update, retrieve and share machine learning features with other data experts like yourself, in order to save time. You'll also need to have a good command of continuous integration and delivery (CI/CD) tools like Jenkins.
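To make the feature-store idea concrete, here is a toy in-memory sketch (entity IDs and feature names are invented; real offerings such as Feast add versioning, offline/online sync and access control on top):

```python
import time

# Toy in-memory feature store: a sketch of the idea only.
# Entity IDs and feature names are invented for illustration.

class FeatureStore:
    def __init__(self):
        # (entity_id, feature_name) -> (value, write_timestamp)
        self._data = {}

    def put(self, entity_id, feature, value):
        """Store or update a feature value for an entity."""
        self._data[(entity_id, feature)] = (value, time.time())

    def get(self, entity_id, feature):
        """Retrieve the latest value, or None if never written."""
        entry = self._data.get((entity_id, feature))
        return entry[0] if entry else None

    def feature_vector(self, entity_id, features):
        """Assemble the model-ready vector that teams share and reuse."""
        return [self.get(entity_id, f) for f in features]

store = FeatureStore()
store.put("user_42", "avg_basket_eur", 35.0)
store.put("user_42", "visits_last_30d", 7)
vector = store.feature_vector("user_42",
                              ["avg_basket_eur", "visits_last_30d"])
print(vector)
```

The point is that training pipelines and live inference services both read the same `feature_vector`, which is what keeps features consistent between experimentation and production.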