Sharath Kumar Thungathurthi · sharaththungathurthi.hashnode.dev · Nov 14, 2024
Managed vs External Tables
In an interview, questions about managed vs. external tables in PySpark are likely to focus on concepts, practical applications, and potential scenarios where one is preferable over the other. Here are some areas to prepare for: 1. Definition and Dif...
Tags: PySpark

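The preview cuts off, but as a rough sketch of the distinction the post introduces: with a Hive-enabled SparkSession, saveAsTable without a path creates a managed table (Spark owns both the metadata and the data files), while supplying a path option creates an external table (Spark tracks only the metadata). The table and path names below are illustrative, not from the post.

```python
from pyspark.sql import SparkSession

# Illustrative local session; table names and paths are hypothetical.
spark = (
    SparkSession.builder
    .appName("managed-vs-external")
    .enableHiveSupport()
    .getOrCreate()
)

df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# Managed table: Spark owns data and metadata; DROP TABLE also deletes the files.
df.write.saveAsTable("sales_managed")

# External table: Spark only registers metadata; files at `path` survive a DROP TABLE.
df.write.option("path", "/tmp/warehouse/sales_external").saveAsTable("sales_external")
```
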
Sharath Kumar Thungathurthi · sharaththungathurthi.hashnode.dev · Oct 19, 2024
How to Perform Efficient Data Transformations Using PySpark
Here are some common interview questions and answers related to transformations in Spark: 1. What are narrow and wide transformations in Spark? Answer: Narrow transformations are transformations where each partition of the parent RDD is used to produ...
36 reads · Tags: pyspark transformations

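Since the excerpt is truncated, here is a minimal illustration of the narrow vs. wide distinction it describes: map works on each partition independently, while groupBy needs a shuffle across partitions. All names and values are made up for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("narrow-vs-wide").getOrCreate()
rdd = spark.sparkContext.parallelize(range(10), numSlices=4)

# Narrow transformation: each output partition depends on exactly one parent partition (no shuffle).
doubled = rdd.map(lambda x: x * 2)

# Wide transformation: output partitions depend on many parent partitions, so Spark must shuffle.
grouped = doubled.groupBy(lambda x: x % 3)

print(doubled.take(5))
print(sorted((k, sorted(v)) for k, v in grouped.collect()))
```
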
Sharath Kumar Thungathurthi · sharaththungathurthi.hashnode.dev · Oct 19, 2024
Unlock PySpark's Power: Techniques for Parallelizing
Conceptual Questions: What is parallelize in PySpark? parallelize is a method in PySpark used to convert a Python collection (like a list or a tuple) into an RDD (Resilient Distributed Dataset). This allows you to perform parallel processing on the ...
1 like · Tags: PySpark

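As a small, self-contained sketch of the parallelize method the post describes (the list contents and partition count are arbitrary examples):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parallelize-demo").getOrCreate()
sc = spark.sparkContext

# Turn a plain Python list into an RDD spread across 3 partitions.
numbers = sc.parallelize([1, 2, 3, 4, 5, 6], numSlices=3)

print(numbers.getNumPartitions())              # 3
print(numbers.map(lambda x: x * x).collect())  # [1, 4, 9, 16, 25, 36]
```
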
Vaishnave Subbramanian · vaishnave.page · Sep 29, 2024
Sparking Solutions
Introduction to Spark Optimization: Optimizing Spark can dramatically improve performance and reduce resource consumption. Typically, optimization in Spark can be approached from three distinct levels: cluster level, code level, and CPU/memory level. ...
1 like · 262 reads · Series: Dabbling with Apache Spark · Tags: spark

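Only the introduction is previewed here, but as one hypothetical example of a code-level optimization in the spirit the post describes, caching a result that several actions reuse avoids recomputing it (the dataset and column names below are invented):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("code-level-optimization").getOrCreate()

# Hypothetical dataset, standing in for whatever the job actually reads.
events = spark.range(1_000_000).withColumn("bucket", F.col("id") % 10)

# Cache an aggregate that multiple downstream actions reuse.
per_bucket = events.groupBy("bucket").count().cache()

per_bucket.show()                           # first action materializes the cache
print(per_bucket.orderBy("count").first())  # second action reuses it instead of recomputing
```
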
Harvey Ducay · hddatascience.tech · Sep 25, 2024
Installing dependencies for learning PySpark
I've had a lot of issues downloading PySpark locally, and even with all the support from online forums such as Stack Overflow, I still wasn't able to fix my dependency issues when running PySpark. I said to myself, maybe this is the time I start uti...
Tags: PySpark

Kilian Baccaro Salinas · datagym.es · Sep 22, 2024
Running notebooks in parallel using runMultiple
NotebookUtils (aka MSSparkUtils) in Microsoft Fabric is like your secret toolbox for working with notebooks. This time we will focus on the runMultiple() notebook utility, which lets you run notebooks in parallel. The A...
Tags: PySpark

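The excerpt ends mid-sentence, but a minimal sketch of the utility it names might look like the following. This only works inside a Microsoft Fabric (or Synapse) notebook session where mssparkutils is available, the notebook names are placeholders, and the exact return shape can vary by runtime version.

```python
# Available only inside a Microsoft Fabric / Synapse notebook session.
from notebookutils import mssparkutils

# Run both notebooks concurrently instead of one after another; names are placeholders.
results = mssparkutils.notebook.runMultiple(["Load_Sales", "Load_Customers"])
print(results)
```
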
Vishal Barvaliya · vishalbarvaliya.hashnode.dev · Sep 18, 2024
Why Does the "Executor Out of Memory" Error Happen in Apache Spark?
Apache Spark is a tool used to process large amounts of data. It's fast, scalable, and great for big data tasks. However, sometimes when working with Spark, you might run into a common issue: the "Executor Out of Memory" error. If you've seen this er...
9 likes · Tags: #apache-spark

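The preview stops before any fixes, but typical mitigations revolve around giving executors more memory headroom and splitting shuffles into more, smaller partitions. The values below are placeholders to show where such settings go, not tuning recommendations.

```python
from pyspark.sql import SparkSession

# Illustrative settings only; the right values depend on your cluster and data volume.
spark = (
    SparkSession.builder
    .appName("oom-tuning-sketch")
    .config("spark.executor.memory", "4g")          # heap available to each executor
    .config("spark.executor.memoryOverhead", "1g")  # off-heap headroom (Python workers, buffers)
    .config("spark.sql.shuffle.partitions", "400")  # more, smaller shuffle partitions
    .getOrCreate()
)
```
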
Vishal Barvaliya · vishalbarvaliya.hashnode.dev · Sep 17, 2024
KPMG PySpark interview questions for Data Engineer 2024
How do you deploy PySpark applications in a production environment? What are some best practices for monitoring and logging PySpark jobs? How do you manage resources and scheduling in a PySpark application? Write a PySpark job to per...
Tags: kpmg

Rahul Das · schemasensei.hashnode.dev · Aug 31, 2024
Getting Started with PySpark
Apache Spark is a powerful distributed computing framework commonly used for big data processing, ETL (Extract, Transform, Load), and building machine learning pipelines. It supports various programming languages, including Scala, Java, and Python, m...
2 likes · 35 reads · Tags: spark

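As a minimal, self-contained starting point in the spirit of the post's title (the app name and sample data are invented here):

```python
from pyspark.sql import SparkSession

# Entry point for the DataFrame and SQL APIs, running locally on all cores.
spark = (
    SparkSession.builder
    .appName("getting-started")
    .master("local[*]")
    .getOrCreate()
)

df = spark.createDataFrame(
    [("alice", 34), ("bob", 29)],
    ["name", "age"],
)
df.filter(df.age > 30).show()

spark.stop()
```
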
Freda Victor · learndataengineering.hashnode.dev · Aug 23, 2024
Building a Weather Dashboard: A Data Engineer's Journey from Lagos to Ibadan
Discover the journey of building a weather dashboard from Lagos to Ibadan. Learn how to collect, transform, and visualize real-time weather data using PySpark, Streamlit, and Plotly. Perfect for data enthusiasts and engineers! 🌦️ Introduction: Weathe...
1 like · Tags: data-engineering