Constantin Lungu · datawise.dev · Feb 8, 2025
Using EXISTS with LOGICAL_OR in BigQuery
Long time, no see! Here's a quick SQL exercise that illustrates some important modern concepts. We're given a list of updates per order, and at each point in time we have some flags. Our goal is to check, for each order, whether there was any ...
Tags: Practical SQL, bigquery
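The excerpt cuts off before the actual query, but the pattern it points at is a per-order "was any flag ever set?" check. A minimal sketch of that idea, run through the BigQuery Python client, is below; the `order_updates` table and its columns are hypothetical placeholders, not the post's real schema.

```python
# A minimal sketch of the LOGICAL_OR pattern, assuming a hypothetical
# `order_updates` table with columns order_id, updated_at, is_cancelled.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
SELECT
  order_id,
  -- TRUE if *any* update for this order ever had the flag set
  LOGICAL_OR(is_cancelled) AS was_ever_cancelled
FROM `my_project.my_dataset.order_updates`
GROUP BY order_id
"""

for row in client.query(query).result():
    print(row.order_id, row.was_ever_cancelled)
```

The same question can be phrased with a correlated EXISTS subquery; LOGICAL_OR answers it in a single aggregation pass instead.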
Umair · umair-blogs.hashnode.dev · Feb 8, 2025
Snowflake-Loading-Data
Business Overview: Snowflake's Data Cloud is based on a cutting-edge data platform delivered as a service (SaaS). Snowflake provides data storage, processing, and analytic solutions that are quicker, easier to use, and more versatile than traditional op...
Tags: data engineering projects
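The post's loading details are truncated here, but the canonical Snowflake bulk-load step is a COPY INTO from a stage. A short sketch via the Python connector follows; the account, table, and stage names are placeholders I've assumed for illustration.

```python
# A minimal sketch of bulk-loading staged CSV files into Snowflake, assuming
# a hypothetical RAW_ORDERS table and an external stage named my_s3_stage.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",      # placeholder account identifier
    user="LOADER",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # COPY INTO pulls the staged files into the target table in parallel
    cur.execute("""
        COPY INTO RAW_ORDERS
        FROM @my_s3_stage/orders/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
finally:
    conn.close()
```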
Umair · umair-blogs.hashnode.dev · Feb 8, 2025
Real Estate Data Pipeline with AWS, Airflow, Snowflake & Power BI
This project implements a scalable data pipeline to extract, transform, and load real estate data from Redfin into Snowflake using AWS services. The data is later visualized in Power BI to provide insights into real estate trends. Overview: The pipeli...
Tags: data engineering projects
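Since the pipeline is orchestrated with Airflow, its skeleton is presumably an extract → transform/stage → load DAG. The sketch below shows that shape only; the task callables, DAG id, and schedule are my placeholders, not the author's actual code.

```python
# A rough sketch of the kind of Airflow DAG the post describes, with
# hypothetical task callables; the real operators and Redfin/Snowflake
# details may differ.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_redfin_data(**context):
    """Download the Redfin market-tracker extract (placeholder)."""
    ...


def transform_and_upload_to_s3(**context):
    """Clean the raw file and push it to an S3 landing bucket (placeholder)."""
    ...


def load_into_snowflake(**context):
    """COPY the staged file from S3 into a Snowflake table (placeholder)."""
    ...


with DAG(
    dag_id="redfin_real_estate_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_redfin_data)
    transform = PythonOperator(task_id="transform", python_callable=transform_and_upload_to_s3)
    load = PythonOperator(task_id="load", python_callable=load_into_snowflake)

    extract >> transform >> load
```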
Umair · umair-blogs.hashnode.dev · Feb 8, 2025
Real-Time-Data-Pipeline-with-AWS-NiFi-and-Snowflake
Project: Slowly Changing Dimensions in Snowflake Using Streams and Tasks. Introduction: This project implements a real-time data pipeline for continuous data ingestion and transformation into a Snowflake data warehouse. It leverages various cloud techn...
Tags: data engineering projects
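The streams-and-tasks pattern the title refers to pairs a stream (change capture on a staging table) with a scheduled task that MERGEs those changes into the dimension. A condensed, Type 1-style sketch is below, executed through the Python connector; every object name is hypothetical and a real SCD task would also inspect the stream's METADATA$ACTION column.

```python
# Condensed sketch: a stream captures changes on a staging table and a
# scheduled task merges them into the dimension. Names are placeholders.
import snowflake.connector

ddl = """
CREATE OR REPLACE STREAM CUSTOMER_CHANGES ON TABLE CUSTOMER_STAGING;

CREATE OR REPLACE TASK APPLY_CUSTOMER_SCD
  WAREHOUSE = TRANSFORM_WH
  SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('CUSTOMER_CHANGES')
AS
  MERGE INTO DIM_CUSTOMER d
  USING CUSTOMER_CHANGES s ON d.customer_id = s.customer_id
  WHEN MATCHED THEN UPDATE SET d.email = s.email, d.updated_at = CURRENT_TIMESTAMP()
  WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
    VALUES (s.customer_id, s.email, CURRENT_TIMESTAMP());

ALTER TASK APPLY_CUSTOMER_SCD RESUME;
"""

conn = snowflake.connector.connect(
    account="xy12345", user="ETL", password="***",
    database="ANALYTICS", schema="MART",
)
try:
    conn.execute_string(ddl)  # runs each statement in order
finally:
    conn.close()
```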
Umair · umair-blogs.hashnode.dev · Feb 8, 2025
Event-Driven Architecture Project with AWS Services
This project demonstrates an event-driven architecture using AWS services to build a real-time data processing pipeline. The architecture is designed to process events generated in an S3 bucket and uses services like SNS, SQS, and Lambda for reliable...
Tags: #aws projects
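At the end of an S3 → SNS → SQS → Lambda chain like the one described, the handler mostly unwraps envelopes: each SQS record carries an SNS notification whose Message field is the original S3 event. A minimal sketch of that unwrapping, assuming the wiring is already configured in AWS:

```python
# Minimal Lambda handler for SQS records that contain SNS-wrapped S3 events.
import json


def lambda_handler(event, context):
    for record in event.get("Records", []):
        # SQS delivers the SNS notification as a JSON string in the body
        sns_envelope = json.loads(record["body"])
        s3_event = json.loads(sns_envelope["Message"])

        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            print(f"New object to process: s3://{bucket}/{key}")
            # ... downstream processing goes here ...

    return {"statusCode": 200}
```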
Umair · umair-blogs.hashnode.dev · Feb 8, 2025
DataFlow Insights
DataFlow Insights is an automated data pipeline project designed to streamline data ingestion, processing, and visualization using AWS services. This project pushes daily data to Amazon S3, automatically crawls and catalogs it using AWS Glue, queries...
Tags: #aws projects
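The "queries" step in an S3 + Glue catalog setup like this is typically Athena. A hedged sketch of kicking off such a query with boto3 follows; the database, table, and results location are placeholders, not the project's real names.

```python
# Start an Athena query against a Glue-cataloged table and poll for completion.
import time
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS events FROM daily_events GROUP BY event_date",
    QueryExecutionContext={"Database": "dataflow_insights"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/queries/"},
)
query_id = response["QueryExecutionId"]

# Poll until Athena finishes (simplified; production code should back off and time out)
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print("Query finished with state:", state)
```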
Umair · umair-blogs.hashnode.dev · Feb 8, 2025
Serverless Data Lake Architecture with AWS
This project demonstrates a serverless data lake architecture using various AWS services. The architecture is designed to ingest, process, and analyze CSV files stored in an Amazon S3 bucket. AWS Lambda functions, Glue Crawlers, Glue Jobs, CloudWatch...
Tags: umair
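One plausible shape for the Lambda piece of such a data lake is "new CSV lands in S3, so re-catalog and transform it". The sketch below assumes a direct S3 trigger and hypothetical crawler/job names, and it only fires the Glue calls; waiting for the crawler before the job is left out for brevity.

```python
# On a new CSV in S3, start the Glue crawler and then a Glue job (sketch).
import boto3

glue = boto3.client("glue")


def lambda_handler(event, context):
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        if not key.endswith(".csv"):
            continue

        # Re-catalog the landing zone so new partitions/columns are picked up
        glue.start_crawler(Name="raw-zone-crawler")

        # Kick off the transformation job for the new file
        glue.start_job_run(
            JobName="csv-to-parquet-job",
            Arguments={"--input_key": key},
        )

    return {"status": "triggered"}
```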
Rohail Iqbal · rohail-iqbal.hashnode.dev · Feb 6, 2025
Stored Procedures vs. Modern Data Platforms
For years, stored procedures have been a go-to solution for handling business logic inside databases. They help automate tasks, improve performance, and ensure security. But as data systems grow more complex, stored procedures have started showing th...
37 reads · Tags: CloudDataSolutions
Dinesh · dinesh-dezoomcamp.hashnode.dev · Feb 5, 2025
Mastering Workflow Orchestration: My Module 2 Journey in Data Engineering Zoomcamp
Introduction: In the second week of Data Engineering Zoomcamp, I dove deep into workflow orchestration using Kestra, a modern and powerful orchestration tool. The journey from handling raw CSV files to implementing automated data pipelines was both ch...
Tags: #dezoomcamp
Chandrasekar (Chan) Rajaram · cr88.hashnode.dev · Feb 4, 2025
Building a Parameterized Full & Incremental Load Pipeline in Azure Data Factory
In today's dynamic data landscape, building dynamic, reusable ETL pipelines is essential. In this blog post, we will see how to build a parameterized Azure Data Factory (ADF) pipeline that supports both full and incremental loads using a metadata-dr...
30 reads · Tags: full-load
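The ADF pipeline itself lives in pipeline JSON, so the sketch below is not the author's implementation; it is only a language-neutral illustration of the metadata-driven decision the post describes, where a control table records each source table, its load type, and its last watermark. All names and columns here are assumptions.

```python
# Illustrative sketch of the metadata-driven full/incremental pattern:
# a control entry per table decides whether to extract everything or only
# rows changed since the stored watermark.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class TableConfig:
    table_name: str
    load_type: str          # "full" or "incremental"
    watermark_column: str   # e.g. a last-modified timestamp column
    last_watermark: datetime


def build_extract_query(cfg: TableConfig) -> str:
    """Return the source query a Copy activity could be parameterized with."""
    if cfg.load_type == "full":
        return f"SELECT * FROM {cfg.table_name}"
    # Incremental: only rows changed since the stored watermark
    return (
        f"SELECT * FROM {cfg.table_name} "
        f"WHERE {cfg.watermark_column} > '{cfg.last_watermark:%Y-%m-%d %H:%M:%S}'"
    )


# Example: one full-load table and one incremental table from the control metadata
configs = [
    TableConfig("dbo.DimProduct", "full", "ModifiedDate", datetime(2025, 1, 1)),
    TableConfig("dbo.FactSales", "incremental", "ModifiedDate", datetime(2025, 2, 1)),
]
for cfg in configs:
    print(build_extract_query(cfg))
```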