How to configure a Kafka cluster for batch processing
I am trying to deliver new rows to the database (consumer) in batches using Kafka Connect, and have configured the source config file accordingly. This is how source.properties looks:
name=source-postgres
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
batch.max.rows=10
connection.url=jdbc:postgresql://<url>/postgres?user=postgres&password=post
mode=timestamp+incrementing
timestamp.column.name=updated_at
incrementing.column.name=id
topic.prefix=postgres_
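
For completeness, these are the source-side batching knobs I have been experimenting with; the values below are illustrative and may not be what actually controls delivery:

# maximum rows returned from the table per poll
batch.max.rows=10
# how often the source table is polled, in milliseconds (the default, if I read the docs right)
poll.interval.ms=5000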
This is the content of the sink properties file:
name=dbx-sink
batch.size=5
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
# The topics to consume from - required for sink connectors
topics=postgres_users
# Configuration specific to the JDBC sink connector.
# We want to connect to a SQLite database stored in the file test.db and auto-create tables.
connection.url=jdbc:postgresql://<url>:35000/postgres?user=dba&password=nopasswd
auto.create=true
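
Both of those are connector configs and run against a standalone worker. My worker file looks roughly like the sketch below; the broker address, converters, and paths are specific to my machine and only illustrative here:

# connect-standalone.properties (illustrative sketch)
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
# file where standalone mode keeps source offsets between restarts
offset.storage.file.filename=/tmp/connect.offsets
# directory containing the JDBC connector jars (path is an assumption)
plugin.path=/usr/share/java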
But these batching settings have no effect: whenever a new row is available, it is inserted into the database (consumer) immediately. So I added the config parameter batch.size=10 to the sink. That has no effect either.
When starting the connect-standalone.sh script I can see batch.max.rows = 10 on the console.
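
This is roughly how it is launched (paths assume a default Kafka/Confluent layout):

bin/connect-standalone.sh config/connect-standalone.properties source.properties sink.properties

If needed, the Connect REST API (default port 8083) should show what config the connectors actually picked up:

curl http://localhost:8083/connectors/source-postgres/config
curl http://localhost:8083/connectors/dbx-sink/config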
What am I doing wrong, and how do I fix it?
batch.max.rows will send 10 rows per batch; it won't limit the number of rows sent in total.
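
In other words, batch.max.rows on the source and batch.size on the sink only cap how many records are handled in one go; neither makes the sink wait for records to accumulate before writing. If the goal is for the sink to insert in batches rather than row by row, one thing to try (a sketch, assuming a standalone worker with otherwise default consumer settings; the values are illustrative) is overriding the fetch behaviour of the worker's consumers via consumer.-prefixed properties in the worker config:

# in the worker config, e.g. connect-standalone.properties
# make the sink's consumer wait for roughly 10 KB of data, or up to 500 ms,
# before handing records to the sink task
consumer.fetch.min.bytes=10240
consumer.fetch.max.wait.ms=500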